
10+ factors to evaluate strict two-sided marketplaces


I have recently been giving a lot of thought to the product considerations in building two-sided marketplaces. A lot has been written about this topic, and some of the posts have been excellent. However, despite the plethora of articles out there, I have noticed that in all the discussions that I have had with other strong product designers, we tend to go around in circles while describing some of the key concepts and how they fit into the exact problem we are trying to solve. In other words, there isn’t a cohesive framework that lists the underlying metrics or key variables that can help a product designer quickly evaluate the merit of an idea for a two-sided marketplace.

This post by the great Bill Gurley details some of the most important considerations while evaluating marketplaces. It is a fantastic post with lots of great nuggets that reveal themselves over and over again, and I urge all who are interested in this topic to read the post and extract the learnings for themselves.

Before we get into the specifics, let’s first understand the nature of the space itself. From Wikipedia:

Two-sided markets, also called two-sided networks, are economic platforms having two distinct user groups that provide each other with network benefits.

The Wikipedia article then cites companies like Facebook, Match.com and eBay as examples of two-sided marketplaces. While technically true, there is a subtle-yet-important distinction between a service like Facebook and a service like eBay when it comes to the two-sidedness of the marketplace. Specifically, it matters greatly whether the consumer of the service is also the buyer of the service. In the case of eBay, the consumer and the buyer are the same person: the buyer pays with real money in return for the value provided by the seller (supplier). In Facebook’s case, the consumer consumes information provided by suppliers (publishers) and pays for it with attention, not money. This attention is then monetized through the Facebook platform and paid for, in real money, by the buyers (advertisers). Thus, Facebook serves three different entities: suppliers, consumers and buyers, and can be considered a three-sided marketplace. This distinction matters greatly in how the incentives are structured for the supply side (adding value) and the demand side (consuming value).

Since the distinction is important, but not clearly vocalized, I am going to define the term “strict two-sided marketplace” to mean two-sided marketplaces where there is an explicit monetary transaction between the consumer and the seller, and thus, the consumer is also the buyer. In short, marketplaces like eBay, Uber and AirBnb, and not like Facebook, Google or even Medium.

In this article, Ben Thompson talks specifically about innovation, Apple and Clayton Christensen, but the core of his thesis is that it matters greatly whether the consumer and the buyer are the same. The distinction is vastly important, and it manifests itself in the specific actions that designers must take to solve the two most important considerations in marketplace design — viz. trust and liquidity.

The authoritative article on building trust in such marketplaces is written by Anand Iyer and is available here. Anand has done such a good job that everyone who is thinking about building marketplaces must read that article at least a few times to grok the fine points. Most people intuitively understand the importance of trust and as consumers, we all make decisions based on trust (or lack thereof) when we decide whether or not to consume products and services. Product designers will thus do well to keep the idea of trust first and foremost in their minds. Ultimately, the key question behind building trust is whether or not the marketplace delivers on the promise of the use case for the consumer in a consistent, timely and predictable manner. To make good on this promise, the marketplace must provide the right set of tools for the supplier and enable them to deliver on said promise. This leads us directly to the other important consideration: liquidity.

I am going to focus the rest of this essay on specific considerations in evaluating a strict two-sided marketplace and understanding the key variables that can expand or shrink the total addressable market (TAM). Some of these can be controlled, while others are dictated by the nature of the goods/services being exchanged.

Evaluating Marketplaces

As mentioned above, Bill Gurley does a fantastic job in detailing some of the key considerations while evaluating marketplaces. The key point is that a lot of these variables are dictated by the nature of the goods/services being exchanged and product designers have very little control over them. However, while thinking through the design and specific decisions, one can target the marketplace to have the most chance of success in building liquidity.

To be comprehensive, I am going to list all the points made by Bill Gurley in his post, but I am also going to add a few others that I think he missed. While possibly less important for the kind of marketplaces that he had in mind, they keep coming up whenever I have discussions or am thinking about newer ideas. Here are the ones listed by Bill, in order, with my own thoughts on each of them:

  1. New experience versus Status Quo
    – Whether or not the experience is a significant improvement over the current use case. For example, the Uber experience is vastly better than that of a taxi.
  2. Economic Advantage versus Status Quo
    – To me, this advantage must be on the demand side, i.e. whether the marketplace offers a cheaper alternative. Again, UberX is cheaper than a taxi for comparable distances.
  3. Opportunity for technology to add value
    – For most discussions, this is a given. If this does not exist, the rest of the discussion is usually moot.
  4. High Fragmentation
    – Here is one where I disagree with Bill’s thesis, and subscribe more to Ben Thompson’s Aggregation Theory. Bill makes the point that high fragmentation is good as it provides easier entry into the market with less resistance from incumbents. I would argue that this was true when the suppliers still had a lot of control over distribution. But the internet has driven the marginal cost of distribution to zero, and as such, if the other factors are in favor, then marketplaces can, and will, take on concentrated incumbent suppliers and modularize them. In other words, high fragmentation is not necessary if the marketplace does a good job of aggregating demand. Even in highly controlled supplier markets, there are examples of hacking this kind of supply liquidity, and while initially challenging, these can deliver great results over the long run (e.g. Netflix).
  5. Friction of supplier sign-up
    – Again, this is a lot more tactical. Whether there is low or high friction, there are different strategies that can solve either problem. Bill does correctly surmise that the critical part is not supplier aggregation, but demand aggregation.
  6. Size of the market opportunity (TAM analysis)
    – Obviously, bigger is better. But no vanity numbers here: examine the TAM with both optimism and paranoia.
  7. Expand the market
    – This is one of the points that I am yet to grasp in its entirety. While doing TAM analysis, one has to look at the factors today and what part of the market is addressable by the addition of the new service. The analysis to figure out new use cases and whether the marketplace expands the market is much harder to do, but also captures a lot of the value. Most entrepreneurs come up with far-fetched scenarios that expand their market. A realistic yet optimistic analysis is very difficult, but separates the good entrepreneurs, product designers and venture capitalists from everyone else.
  8. Frequency of transaction
    – Obviously, higher frequency is better. The flip side is to consider how low can the frequency get before the marketplace starts losing value to the consumer. In other words, at what frequency will the consumer go back to the status quo? For example, if the AirBnb experience was only marginally better and cheaper than comparable hotels, then demand stickiness would be a much tougher problem to solve because the frequency of transaction is very low.
  9. Payment flow
    – Excellent point that if the marketplace is part of the payment flow, then it can dictate some terms of commissions (and also, as we will see, liquidity). Some marketplaces simply match buyers and sellers, while the transaction takes place outside the marketplace (e.g. autos). In this respect, one must follow design principles for aggregation theory. In general, strict two-sided marketplaces have a lot more opportunity to be part of the payment flow, unlike the aforementioned three-sided ones. For example, Yelp is not part of the payment flow, and as such even though it is widely used to make consumer choices with high frequency of transaction + high average cost, it had to define itself as a slightly different marketplace where the supplier is also the advertiser/buyer.
  10. Network effects
    – Most successful marketplaces have some kind of network effects, but they are not all the same. All strict two-sided marketplaces have network effects on the supply-side where greater demand increases incentives for supplier (higher utilization and thus lower cost for services-based supply, or economies of scale for goods-based supply).

The above list is excellent. In addition, I keep coming back to a few more:

  1. Cumulative nature of supply
    – It matters greatly whether the supply is cumulative in nature, or transient. While there is always some ingress and egress of supply as new suppliers come in and old ones go out, some marketplaces have a more-or-less cumulative supply. For example, a house listed on AirBnb is much more likely to remain there. To some extent, that is less true for an Uber driver. For marketplaces selling goods, like Etsy, each seller listed adds greatly to the cumulative nature. This benefits all parties involved: buyer, seller and the marketplace.
  2. Size of transaction
    – This has to be considered in concert with the frequency of transaction. All things being equal, bigger is better. The key question, though, is how these two variables relate to each other. For example, AirBnb has low frequency, but a large transaction size (rumored to be ~$400 on average). Thus, even if the AirBnb transaction occurs once a year, the fee collected (~$58) is a very high ARPU (average revenue per user). The rest of the $400 is passed on to the seller. Uber, on the other hand, has a lower average transaction size, but the frequency is much higher. If the product of the two numbers is the same, higher frequency trumps larger size, as it also begets brand loyalty: consumers are less likely to switch if they are already using one marketplace more often. On the other hand, if the product of size times frequency is larger because of a larger average transaction size, then the marketplace must be designed with that in mind.
  3. Temporal nature of supply
    – Is the supply transient in nature, or mostly there? This is slightly different from the cumulative nature of supply. For example, a restaurant added to a marketplace is cumulative, but events at the same restaurant are transient in nature. Once the event has already transpired, it does not add any value for future liquidity. All transactions are ephemeral in nature, but if the underlying supply is transient itself, then it shrinks the marketplace liquidity. The marketplace designers will then have to keep adding new supply constantly, and this presents a significant challenge while balancing liquidity and demand.
  4. Local or global nature of the transaction
    – The Internet has done a wonderful job of bringing information about global supply to local areas. AirBnb is a great example of this: even though supply is local in nature, the demand it spurs is global. Uber, on the other hand, is very much a local player, and as such can be looked at as a collection of many different local marketplaces. Yes, there is crossover where some consumers (riders) will travel and use Uber in other markets, but the bulk of the design has to account for both supply and demand locally. This is why the marketplace dynamics for Uber are different in each city. As the product designer thinks about this issue, it is important to keep in mind that a strictly local marketplace shrinks the supply potentially available to any given unit of demand. Over time, it can be built and exploited, but the job is much harder than building the whole network at once. Thus, for purely local marketplaces, a city-by-city (also called market-by-market) approach usually works best, even if it all happens at the same time.
  5. Single player mode
    – This is a really important concept from the demand side. Imagine Uber where there is no other rider in the system. The service will still be immensely useful to a single rider. In fact, a single rider does not care whether other riders are present in the system as long as the supply is liquid enough. So, while network effects will make the system more powerful, this concept allows the marketplace designer to employ growth techniques for demand that are somewhat independent of the supply-side. In the absence of the single player mode, the designer has to balance both supply and demand together, a much more challenging prospect.
  6. Buyer/Seller ratio
    – This is a rarely discussed concept, but for increased utilization of supply, the lower this ratio, the better for the marketplace. A lower ratio means that there are many more buyers, and thus each seller has to service a number of buyers. This increases the utilization of supply, and incentivizes sellers to make marketplace supply their primary occupation. If sellers are professionally tied to the marketplace for their income, they are much less likely to leave it. Said another way, through the lens of aggregation theory: as the marketplace controls demand, the supply side loses power and becomes modularized and interchangeable.
  7. Cross-over between buyer and seller
    – In some marketplaces, increased activity by the buyer also moves the buyer towards becoming a seller. For example, people who buy things on eBay also tend to sell things on eBay. Sure, eBay has professional sellers that form the backbone of the supply, but the initial growth in liquidity comes from these very important buyers who tend to become sellers. However, even in these circumstances, the considerations for the buyers and the sellers are different, and the marketplace design should keep that in mind. Some marketplaces are just not suited for this; for example, Uber drivers are actually less likely to be riders.
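The size-times-frequency arithmetic above is worth making concrete. Here is a minimal sketch; the AirBnb-side numbers come from the text (the ~14.5% take rate is implied by a ~$58 fee on a ~$400 booking), while the Uber-side numbers are purely illustrative assumptions:

```python
def annual_revenue_per_user(avg_transaction, take_rate, transactions_per_year):
    """Marketplace revenue per consumer per year: size x take rate x frequency."""
    return avg_transaction * take_rate * transactions_per_year

# AirBnb-style: large transaction, low frequency.
# The 58/400 take rate is implied by the figures cited above.
airbnb = annual_revenue_per_user(400, 58 / 400, 1)

# Uber-style: small transaction, high frequency (made-up numbers).
uber = annual_revenue_per_user(12, 0.20, 100)

print(f"AirBnb-style ARPU: ${airbnb:.2f}")
print(f"Uber-style ARPU:   ${uber:.2f}")
```

The point of the exercise: two marketplaces can arrive at a similar ARPU with wildly different size/frequency profiles, and the design implications differ accordingly.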

I realize that this list is far from complete, but I find myself coming back to these considerations in multiple discussions.

Product design is highly subjective by nature, but our job as product managers/designers/thinkers is to balance the different forces and come up with valid frameworks that enable predictable action.

In future essays, I want to tackle the question of how to systematically add and improve liquidity if the TAM is big enough, and, once there is enough liquidity in the marketplace, how to employ specific marketing techniques for scale. And while the considerations above are meaningful in my own analysis, there is a lot of potential to formulate a theory and understand/quantify the relative value of the different variables.

If you have any questions, or comments, you can find me at anuragmjain — at — gmail.com

Basics you wanted to know about big data, machine learning and artificial intelligence, but were afraid to ask


Over the past few months, the field of artificial intelligence has been exploding. A lot of people I meet here in the Bay Area talk about it constantly, and they try to come up with different use cases for artificial intelligence. It is increasingly clear that artificial intelligence will be a major toolset of the future. I believe it will exceed the status of a toolset and find an evolutionary path of its own.

But the more conversations I have around this, the more definitions I hear around the different buzzwords. What is artificial intelligence? Is it the same as machine learning? Some people throw around words like Natural Language Processing (NLP). What is that? Most predictive analytics companies claim to be using some form of artificial intelligence. Are they really all using cutting-edge technologies? If not, what are they using? And how does it help or hurt them when competing with other companies who, in fact, are using some of the cutting edge tools?

Over the next few blog posts, we plan to illuminate the key differences between what people are doing, how to think about machine learning and AI in your product, and how to prepare your company to be competitive in the future that is inevitable.

But first, some definitions. Keep in mind that these blog posts are written from the point of view of practitioners and not researchers (although we work hand in glove with researchers). Thus, we won’t get super technical about any of these items. There are people far smarter and far more articulate who have done an excellent job of demystifying the science behind all of these concepts. We will do a blog post compiling some of our favorite resources very soon. For now, we will focus on the practical aspects of the field and how company executives should be thinking about the best ways to use data to put their companies on the far end of the competitive spectrum.


Ok, enough chatter. On to some loosey-goosey definitions, along with a recap of some of the basics:

What is big data?
So, you have heard the term big data and understand that it refers to a large amount of data that could be structured or unstructured. As you know, it is important because there are meanings, patterns and predictive behavior hidden in this large swath of data. However, the traditional computational and data processing techniques that we all grew up studying just don’t solve the problem of understanding the meaning behind such large amounts of data. Firstly, this large amount of data needs to be stored across hundreds (or thousands) of servers. Then, it has to be presented in a format where the data can be analyzed. Traditional techniques of analyzing massive amounts of data in one go just don’t work; this is the main problem that traditional analysts have. They just can’t hold and analyze the data the way they did in the past. Along with the proliferation of the cloud, newer big data techniques can help wrangle this large amount of data much more easily. Which brings us to the next question:
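To make the “split the data across servers, then recombine” idea concrete, here is a toy, single-machine sketch of the map/reduce pattern that underlies many big data systems. The chunks below stand in for data that would really live on different servers:

```python
from collections import Counter
from functools import reduce

# Pretend each chunk lives on a different server in a cluster.
chunks = [
    "big data is just data",
    "data about data is metadata",
    "patterns hide in large data",
]

# Map: each "server" summarizes its own chunk independently.
partial_counts = [Counter(chunk.split()) for chunk in chunks]

# Reduce: combine the partial summaries into one global answer.
total = reduce(lambda a, b: a + b, partial_counts)

print(total["data"])  # -> 5
```

Real systems like Hadoop or Spark add distribution, fault tolerance and scheduling, but the shape of the computation is the same: no single machine ever has to hold or scan all the data at once.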

How do we make sense of all this data? 
To make sense of the data, we first have to present it in a format that an algorithm can consume. The next part is tweaking those algorithms to get the desired understanding. Machine learning is one of the newer techniques that can help uncover the patterns in the data without an analyst starting from a specific viewpoint. Actually, machine learning techniques have been around for decades (yes, decades). But in 2012, there was a major breakthrough that achieved a phenomenal result in identifying handwritten digits. The technique that the researchers used came to be known as deep learning. Researchers, and then practitioners, all over the world rejoiced, and felt that this was the new silver bullet to solve the world’s data analysis problems. Coupled with the fact that everyone was generating vast amounts of data, researchers felt more confident that this technique + big data could find hidden meanings that were more difficult to find in the decades past. It looks like their excitement was well placed. Great progress has been made in this area, and the progress continues to surprise even the most ardent fans of the techniques.

So, machine learning lets computers find meanings in data?
In short, yes. But that’s a very broad definition. More specifically, machine learning refers to the idea of letting these new algorithms and techniques find meaning in data without starting from an analyst’s viewpoint. Let me give you an example. With traditional data analysis, a typical analyst will come up with theories on how the data could be related and then validate those theories. Most of the time, their hypothesis proves incorrect, but not without giving them more information with which to come up with a new hypothesis. Machine learning techniques turn this approach on its head. By letting machines discover patterns in the data, they can be used to find highly complex relationships within the data which cannot be adequately modeled by even the best mathematicians. Exactly how they do this is the subject of another blog post, where we will cover basic concepts like supervised learning and unsupervised learning, and when each one makes sense. For now, let’s keep in mind that machine learning techniques are more powerful and try to uncover patterns which the machine learning theorist or practitioner need not be aware of before the process begins.
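As a tiny illustration of a machine finding structure without an analyst’s hypothesis, here is a sketch of k-means clustering, a classic unsupervised learning technique. The data and the two-cluster assumption are made up, chosen purely for simplicity:

```python
# Toy 1-D k-means: the algorithm finds two groups in the data on its own;
# no analyst specified where the boundary between the groups lies.
purchases = [2, 3, 4, 5, 48, 50, 52, 55]  # e.g. dollars spent per visit

centers = [float(min(purchases)), float(max(purchases))]  # crude initial guess
for _ in range(10):
    # Assignment step: attach each point to its nearest center.
    groups = {0: [], 1: []}
    for x in purchases:
        nearest = min((0, 1), key=lambda i: abs(x - centers[i]))
        groups[nearest].append(x)
    # Update step: move each center to the mean of its group.
    centers = [sum(g) / len(g) for g in groups.values()]

print(sorted(centers))  # two discovered cluster centers
```

The algorithm converges on a “casual spender” group and a “big spender” group without being told that such groups exist; that is the hypothesis-free flavor described above.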

Ok, I get it. Can machine learning be applied to ‘small data’?
Yes. It is not necessary that a large amount of data be present for the techniques to be successful. A simple way to think about it is whether the data contains enough information and structure to make some sense of. For example, a list of 100 houses in a zipcode with prices and square footage will give one a very good idea of how to price a new house given its square footage. However, if the data only contained house prices and the number of windows in each house, then that’s not a good indicator. The best test is this: if a human can be trained to make some sense of the data without relying on other knowledge, then a machine can probably do so as well.
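The house-pricing example above is just a one-variable linear regression, which can be fit by hand in a few lines. The data below is made up and perfectly linear, purely for illustration:

```python
# Fit price = a * sqft + b by ordinary least squares (illustrative data).
sqft  = [1000, 1500, 2000, 2500, 3000]
price = [200_000, 300_000, 400_000, 500_000, 600_000]

n = len(sqft)
mean_x = sum(sqft) / n
mean_y = sum(price) / n
# Slope: covariance of (sqft, price) divided by variance of sqft.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(sqft, price)) \
    / sum((x - mean_x) ** 2 for x in sqft)
b = mean_y - a * mean_x

# "Training" done; now predict the price of an unseen 1,800 sqft house.
print(a * 1800 + b)  # -> 360000.0
```

Five data points are enough here because the signal (price per square foot) is strong and the structure is simple; that, not sheer volume, is what the technique needs.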

So, what is this artificial intelligence?
Artificial intelligence is the most difficult one to define. I tried to read the definition on Wikipedia, and it gave me a headache. Everyone defines it differently, but in general it refers to the idea of computers and algorithms doing things that were earlier considered the dominion of humans. For example, understanding complex voice commands, sentences and phrases was considered near impossible about a decade ago, and yet, computers are able to do just that. Similarly, reading, characterizing and understanding handwritten signs, or the landscape while driving a car are all things that seem fantastic for a machine to be able to do. Ultimately, under the covers, it is a matter of getting a lot of information from various sources (multiple cameras and all kinds of sensors) and correlating it in a manner which is similar to how we make sense of the data. Hence, the term ‘artificial intelligence’ — there is a lot more complex “solving” and “learning” happening. Also, it sounds cool!

I hope the above gives you some sense of the world of machine learning and artificial intelligence. Over the next few posts, we will go a little deeper into each topic, while keeping in mind that our target audience are industry executives who should be prepared for the changes which are already occurring in their industries.

If you have any questions, feel free to email me or find me on linkedin. I’d love to hear from you if I can help you or your team with machine learning.

A march towards quality


So, I have been having discussions with a very good friend of mine over the past many months. While I have bounced around ideas on how to write about specific topics, I have never been able to actually start. Today, he challenged me to write something anyway, no matter how shitty it is. After racking my brains on how to come up with the perfect subject matter, I was reminded of this story from the book Art & Fear. Ever since I read it, I have been boring all my friends and everyone I meet with it. It is a famous parable in the book, and goes something like this:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality.

His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”.

Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

While the implications of the story are obvious, and most of you are nodding your head vigorously while applauding the simplicity of the message, it is worthwhile to delve a little deeper. The story is particularly powerful because our initial reaction is one of amazement and awe. However, the awe subsides very quickly as comprehension takes hold and it is immediately obvious from our own experiences that this must be true.

Then, why is our initial reaction one of awe? I believe it is primarily because we are less inclined to believe that a random group of students can iteratively learn without having any outside impetus to do so (since they were tasked only with coming up with quantity, not quality). However, when the discussion becomes personal, we can feel the truth of the story because clearly, that is exactly what we would do in that situation. I certainly would like to think so.

In any case, the clear message is that it’s far superior to start from a horrible iteration and work your way up to quality instead of sitting, reading and theorizing about the right way to get there. I must admit, I have been plenty guilty of the latter. Actually, in software development, quantity does inform quality. Over time, the most prolific developers end up writing the best code.

So, I am going to start small and get back into the habit of writing random thoughts on this blog. If, and when, the material becomes post-worthy for other places, I might actually publish. But for now, it is all quantity with absolute disregard towards quality.

Onward!


Remembering Steve Jobs


Much has been written about Steve Jobs in the last couple of days. People throughout the world have showered their love and affection on the public figure of Steve Jobs. Apple’s products showed that their creator must be a force to be reckoned with. For people here in the valley, especially those enchanted with products, design and innovation, Steve was a legend way before his death.

I never met the man, and only knew of him through his products, his public appearances and his quotes. Most people, like me, knew Steve Jobs only this way. So why is it that we all feel so overwhelmingly sad at his demise? Is it because he was a great man of our times who brought the future to the present? Is it because he showed us what beautiful products look like? Is it because he reminded us that true beauty is, in fact, universal? Is it because we all know that products in the future will just not be the same anymore without the guiding force of superior design?

It is all of the above and much, much more. Most people say that Steve’s brilliance lay in the fact that he really knew what customers wanted. I think that’s totally incorrect. I don’t think he cared about what other people wanted. But I think he did care immensely himself. He was just trying to make the best thing he could given the technology of our times. Steve wanted more, because he knew that more was possible. The main difference between him and other innovators is that he went much further in trying to solve a problem. Most people would stop at a very, very good product. But not Steve. He wasn’t trying to be perfect, but he was making the best thing that appealed to him. And because he was relentless in his quest to find the thing that truly appealed to him, it ended up appealing to the masses as well.

Yes, Steve inspired us all to make beautiful things. And yes he did inspire us to design things well. But the reason we really miss him is because he showed us that we should do something wonderful that we want to do, and not because someone else demands it. Steve showed us a way to live our life which breaks all traditional and cultural norms. He showed us that we should stop worrying about what other people (including customers) think, and instead worry about whether we are being true to ourselves in building what we can. He showed us that it’s important to have taste, good or bad, and bring it to what we do. He showed us that good enough just isn’t good enough.

Steve taught us that it’s important to do the best you can with whatever is available and not give up too soon. Not because we should satisfy other people’s cravings, but for our own sake. Because, after all, this is our life we are talking about. No one else teaches us that lesson. Only the people, like Steve, who have lived their life by that code can inspire us to attempt to do the same. And they teach us that the purpose of our life is not to earn great riches or to have huge impact or build things that others want, but simply to do something wonderful.

And for a reminder of that lesson, I thank you Steve, and may you rest in peace.

Why PayPal doesn’t quite work for me


A few days ago, I bought some software for a friend. He lives in Los Angeles, so getting paid back was obviously going to be an issue. I had the software shipped to him directly from the vendor and paid for it myself (it had to be my credit card). We both thought that using PayPal would be super easy, and that I would get paid easily enough.

I remembered that many months ago another friend of mine had sent me an invoice and I got charged a fee. Not wanting to pay any fee this time, I decided to send the invoice to this friend instead, but I still got charged a hefty fee: on a $96.08 transaction, PayPal charged me $3.09. This is ridiculous. The worst part is that it is not at all clear where, and to whom, fees get charged. Only recently has PayPal made its fee structure clear and easy to find on its website.
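For what it’s worth, the fee is consistent with the 2.9% plus $0.30 per transaction rate commonly cited as PayPal’s standard domestic fee at the time (an assumption on my part; exact schedules vary by account type and country):

```python
# Reverse-engineer the fee: 2.9% of the amount plus a $0.30 fixed charge
# (assumed rate; PayPal's actual schedule varies by account and country).
amount = 96.08
fee = round(amount * 0.029 + 0.30, 2)
print(fee)  # -> 3.09
```

So the “hefty fee” is the standard percentage-plus-fixed structure, which is especially punishing on small personal transactions like this one.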

In any case, I don’t think I will be using PayPal anytime soon to ask my friends for money they owe me. A cheque is not too much trouble: it arrives in the mail, and I can deposit it the next time I go to withdraw some cash from my nearby ATM.

Go ahead, cancel your AT&T contract


AT&T sent me a notice detailing that the old ETF is no longer in effect. If you remember, AT&T charges an early termination fee (ETF) of $175 if one cancels the 1-year or 2-year contract that was initially used to get a larger subsidy on the cost of the phone. For people who have paid full price for the phone, like me, this was never a good idea and was one of the reasons AT&T managed to keep their tentacles hooked into me.

According to the notice, AT&T Wireless and Cingular customers who’ve had service any time after January 1, 1998 may be in line for their share of an $18 million cash and “cash benefit” settlement. AT&T also notes that this settlement is for their “old” ETF program, and not for the new pro-rated ETFs that they introduced in 2008.

“We strongly deny any wrongdoing, and no court has found AT&T Mobility committed any wrongdoing regarding these fees. However, we have agreed to settle to avoid the burden and cost of further litigation.

It’s important to note that the litigation involves old early termination fee policies of the old AT&T Wireless and Cingular. In 2008 we introduced a new, more flexible early termination fee policy, in which we pro-rate the ETF if you are a new or renewing wireless customer who enters a one- or two-year service agreement.”

With AT&T’s shoddy coverage in the San Francisco area, I am seriously considering breaking my contract and heading for greener pastures.

GMail, why can’t I regain full control of my account after being hacked?


So my gmail account got hacked. Yes, painful, but also very instructive. Firstly, I still do not know exactly how it got hacked. I don’t use any public computer. In fact, I haven’t used any other machine besides my own laptop (Mac) and my iPhone in a very very long time. I don’t sign up for any offers on the internet, and do not install any crap software. Besides, isn’t the Mac supposed to be very safe?

Anyhow, I got hacked and that’s that. The hacker then started sending emails from my account to everyone I have ever communicated with, asking for money. The amazing thing about this story is that the emails are very impersonal, don’t address the recipient in any way, and are full of grammatical and spelling errors. Yet the subject matter is so sensational (being robbed at gunpoint in some foreign country) that everyone gets worried about my safety. If I received a similar email, I probably wouldn’t stop to analyze it either, and would fall for it as well. So far, the hackers have preyed on the emotions of their victims, using me as the medium.

During the course of trying to get my account back, I ran into several issues and got a sneak peek at exactly how these hackers exploit the system, GMail in particular. I had my Yahoo account set up as the secondary email for emergencies and verification. The hacker was quick to change the secondary account first. GMail also has a system of sending verification codes to a mobile device; this too got changed quickly, to some mobile number in Nigeria. During this time, I tried in vain to regain control by asking GMail to reset my password and send me the password reset code. GMail only shows that the reset code was sent to xxxx@yahoo.com, without revealing the username portion of the yahoo.com address. So while I was waiting for my password reset code to arrive at my yahoo.com address, the hacker was watching the reset requests come in to the temporary Yahoo address he had set up. I am sure he was laughing at my stupidity, and at the fact that I sent in multiple requests when the first one failed.

Ok, I was baffled. So I went through the GMail system to report that my account had been compromised. I had to fill in multiple details, including when my account was first started and the invitation code I used to join (if at all). Obviously, I didn’t have any of these, but I made my best guesses, and lo and behold, GMail returned my account to me. I was able to reset my password and rejoice.

Alas, too soon!

After proclaiming victory, I tried to send a few emails, and it all worked fine. I promptly sent emails to a huge list of people warning them that I had been hacked, and to ignore requests from me for money. There were a few people on the list that I wouldn’t have minded getting some money from, but this had to be done.

The hacker, during this time, had very smartly set up a forwarding rule so that he was getting all the emails arriving at my account. This, by itself, is not much. But here comes the most amazing part: a feature that Google engineers missed seeing as a threat, and that these hackers have managed to exploit. Before I explain the flaw, a little diversion into the background.

GMail allows one account to send emails while masquerading as another account. This was designed primarily so that someone with multiple GMail accounts (including Google Apps email accounts; I have a @gmail.com and a @gigzee.com account) can use one primary account to send emails on behalf of all the others. Great idea, and I love it. All it takes to set up is a simple verification email. Say you have a1@gmail.com and b2@gmail.com, and you want b2@gmail.com to be able to send emails that show up as coming from a1@gmail.com. You go to b2’s settings and add another email address, which sends a verification email to a1@gmail.com. After clicking on the verification link and entering the code, b2@gmail.com can now send as a1@gmail.com. But if you then delete the verification email from the a1 account, there is NOTHING in the settings or account panel of a1@gmail.com that shows that b2 can still send emails as a1.

This is exactly what the hacker has done. He has set up another gmail account, and is sending emails on behalf of my gmail account. During this time, he is also receiving the auto-forwarded emails of my account. So even though I have changed my password, and declared victory, he can still receive and send emails just as if he were in full control.

So, step 1, I removed the forwarding rule. Ok, now he cannot get any emails sent to me. Yayyy!
What about his ability to send emails? It turns out that there isn’t any additional verification after the initial one. What’s more, there is no indication anywhere in my account settings of how many other accounts can send emails as me. This is terrible. So while I have full control with a brand new password, the hacker can simply keep sending emails to anyone he likes while pretending to be me, ruining my reputation in the process.

GMail – I am not sure how you could have missed this in one of your threat model analyses. Please add an option in account settings where I can control who can send emails pretending to be me. Meanwhile, the hacker is having a field day sending emails from my account, and can do so as and when he pleases. I am writing a letter to GMail as well so that they can fix this, but if you get any email from me asking for money (personal or not), please don’t wire it to somewhere in Europe. Now, if you want to hand over some cash to me in person, feel free to give me a call!

How many people do you know who have never got a speeding ticket?



When I was in graduate school, one of my roommates asserted that he planned to go through life without getting a speeding ticket. I met him a couple of months ago and asked him how he was doing on that plan. He had got one speeding ticket! Amazingly, even though I had never made my intentions public, I too had planned to go through life without getting a speeding ticket. Yet, despite my best intentions, I got a ticket a few years ago.

I know what you are thinking: clearly, both my friend and I should have been more careful about sticking to the posted limits. And for the most part, we were. The trouble is that the boundary where you break the law is not where it is enforced; enforcement is more of a gray area. In theory, if the posted speed limit is 60 mph, then the boundary lies at 60 mph: go any faster and you are speeding and should get a speeding ticket. In practice, most of the traffic travels at a speed greater than 60 mph (typically somewhere between 65 and 70 mph, the “accepted” 5-10 mph above the limit), making for an illegal-but-accepted zone of 5-10 mph.

This zone exists for a variety of reasons. First and foremost are the technicalities: the margin of error of radar guns and other speed-checking devices, calibration errors, and so on. Smart lawyers try to get their clients off based on such technicalities, so the police try to catch people outside this range of error. But not always: sometimes the cops will catch you even if you are going only 4 mph above the posted limit (say 64 in a 60 zone). Why the discrepancy?

One way to deal with this is to always follow the posted speed limit. But if the general traffic is going faster, then not only will you be the slowest car on the road, you might also be holding up traffic, creating a potentially more dangerous situation. Furthermore, if you were following the posted limit, why should you be the one punished by spending more time on the same trip that everyone else completes in less? Law-abiding citizens should be rewarded, not disadvantaged. If someone is traveling above the posted limit, it is the duty of the police officer in charge to issue a ticket; every time an officer neglects that duty, he is abetting a violation.

I have been asking all my friends who have been driving for more than 5 years, and so far, every single one has got a speeding ticket. Some are habitual speeders, but even the more cautious, gentler drivers have managed to be caught speeding at least once.

Do you know anyone who has been driving for a long time without ever getting a speeding ticket? Please add it in the comments; I would love to know.
