5 tips for product managers joining a new team or company during Covid-19

Joining a new team, let alone a new company, is hard during normal times. Covid-19 has made that challenging time much more difficult because even infrequent in-person meetings have been relegated to your company’s favorite video chat application. In that respect, even local workers have become “remote”, and all of us have become talking heads on a screen.

Under normal circumstances, a new product manager (including leads and directors) can sit in on various meetings and spend time in person with folks from the new team. Unfortunately, an incoming product manager is expected to hit the ground running despite losing all of that to this new normal.

Here are 5 areas to concentrate on for building relationships, credibility and expertise in a hurry:

  1. Build relationships
    Perhaps the most important part of a product manager’s job is to build strong relationships with the new team, supporting teams and any partners. Everyone is dealing with the pandemic in their own way, and the net result is that meetings have tended to become transactional rather than relationship-building.

    In my opinion, most of the magic for a product manager happens during those extra bits before and after the actual meeting, or in the hallway, or at lunch. Additionally, I have always built strong, long-lasting relationships during travel when the team bonds on a shared trip.

    With this view in mind, the best thing for an incoming product manager to do is to concentrate on building relationships. Part of this means going out of your way to do things that you wouldn’t normally do in a new job. Some ideas include setting up an info session (trust me, everyone is feeling out of the loop), bringing teams together, and frankly, just listening with an open mind.

    The small chit-chat at the beginning of a meeting is indispensable and perhaps more important than ever. Strong relationships will provide you the runway needed to do your job effectively, especially when decisions are unclear and the going gets tough.
  2. Build credibility
    Building credibility in a new team is hard. I don’t mean expertise in the job, but really credibility as a person of integrity – what the British call “sound”. To be sound means that you know how to behave, what needs to be done in each circumstance, and in general, you won’t let the larger team down.

    No one knows the awesome work that you have done before you got to the new team, and worse, no one cares. Humans are adept at building trust and credibility through body language, water-cooler talk and shared experiences like lunches and trips, but that luxury is gone during Covid-19.

    So, concentrate on building credibility by communicating your principles more often than you would normally do. One tip is to over-communicate even at the risk of sounding pedantic. This is a tough balance to hit in a new team, and everyone wants to communicate just so. But keep in mind that the penalty for over-communicating is less than the penalty for under-communicating. Hence, my advice is to come out and say the obvious. You will be surprised how often others are not aware of the “obvious”.

    Building credibility is very important for people managers because everyone is naturally always more worried about their own job and their place in the team. Communicating principles and beliefs early and often will allow you to form a basis for credible decisions that your team will appreciate.
  3. Build product expertise
    Ah, this is what you came to the new team to do – product management. As the title of the job itself suggests, you cannot do your job effectively unless your new product becomes second nature to you. Sadly, any product worth its salt will take you months to truly understand. A lot has also been written about this topic, so I won’t get into the details here. Suffice it to say that you should spend a whole lot of time understanding the nuances of product decisions, the history behind why things were done a particular way, and of course, why users care about your product.

    One tip is to ask different people the same questions – the answers will sometimes vary and will give you a good sense of the nuances.
  4. Build data expertise
    Much has been written about this topic as well. I have personally found that being comfortable with the data takes a whole lot of effort, but it is the single best investment of your time (after building the relationships).

    A strong data analysis muscle will pay dividends for months and years, but nothing makes a new PM stand out as much as turning a discussion towards data, especially when folks don’t expect you to know the details.
  5. Build the vernacular
    This is perhaps the hardest part in a new team. It always takes time to build the vernacular, but sadly, you don’t have the luxury of finding an engineer after a meeting to sit down and understand the difference between two similar-sounding but different-meaning words.

    Effective communication is the domain of a strong PM, and nothing derails a meeting like a single word that is understood differently by different people in the room.

There you have it. These are my early observations about 5 areas to concentrate on when joining a new team. While my view is heavily influenced by product management, this applies to any new job, and most of it is obvious to any seasoned professional. Yet, I have found that even seasoned professionals can forget the importance of building relationships and data expertise, and sometimes concentrate more on vision and roadmaps. Those are important, but as I always say – “One must stand on high moral ground first before taking long leaps”.

Have fun in the new team. It is always an exciting time to learn new things and put your stamp on them.

Not all marketplaces can increase prices after winning the market. Can yours?

Any marketplace facilitates the exchange of value between the “producer” (of the value) and the “consumer” (of the same value). By easily matching the two sides in one place, the marketplace adds a lot of value to both.

Typically, the exchange of value is also accompanied by an exchange of cash. In such a case, the marketplace takes a small commission/tax in return for the value it adds. Let’s call the person paying with cash the “buyer”. Similarly, the party receiving the cash in the transaction will be labeled the “seller”.

To keep terminology straight, let’s recap the four members that potentially exist in the marketplace:

  • Producer: adds value into the marketplace.
  • Consumer: consumes the value from the marketplace.
  • Buyer: adds real money into the marketplace.
  • Seller: receives real money from the marketplace.

Note that each of the four parties above can be a person, group or corporation. Thus, this framework applies to all marketplaces, B2B, B2C, C2C, etc. Note also that exchange of cash is not a necessary requirement in all marketplaces.

As we shall see though, not all marketplaces are the same in their ability to reach monopoly status. Furthermore, even if some marketplaces are able to capture a significant portion of the market, they still do not have an ability to increase prices significantly.

There are three basic kinds of marketplaces based on the dynamics of the different parties involved.

Consumer = Buyer

This is the strict two-sided marketplace where the consumer is the same as the buyer, typically accompanied by the producer being the same as the seller. This marketplace is the most common. Key examples are eBay, Amazon, Uber and AirBnb.

Let’s take Uber as the prime example. The drivers are producing the value of rides, and riders are consuming that value. The riders are also the buyers as they pay cash into the marketplace. Uber takes a cut, and then passes on the rest of the money to the drivers (sellers). Since the commission taken by Uber is a strict tax on the system, the marketplace must keep adding value to keep its lead over competitors (like Lyft). Furthermore, until they win a significant portion of the market, Uber must also keep lowering prices across the board to aggregate demand and get more riders. See this tweet by David Sacks to understand how more rider demand leads to more drivers despite lower prices. The same logic applies to all the marketplaces in this category.

So, what happens when Uber wins a significant portion of the market and then raises prices? Unless Uber’s value in the matching algorithm is so high that it cannot be replicated by competitors, as soon as Uber increases prices to push profitability beyond economic equilibrium, it opens the door for a competitor to come in with similar value and lower prices. Thus, to maintain its lead, Uber is better off with lower prices and a larger market share, and will not increase prices for riders. Economists call this “perfect competition”, and companies in perfect competition are, in the long run, both productively and allocatively efficient. In the short run, Uber might give incentives to drivers to flock to its marketplace in order to increase supply, but in the long run, drivers will have no power left and must compete with each other to find equilibrium in prices.

These marketplaces benefit the consumers/buyers, but producers/suppliers typically have no power, and hence, hate them. From a societal perspective, that is a good thing.

Such marketplaces can enable the exchange of value through services (Uber, Instacart, Postmates, Doordash) or products (Amazon, eBay). In case of services, sometimes there is an additional 5th party that is involved — typically the party providing the underlying product. In such cases, the marketplace can expand the underlying market, and thus, demand $$ from the underlying market itself as an additional source of revenue. Example: Instacart can get a kick-back from Whole Foods for increasing their total sales. But from a consumer perspective, Instacart will raise prices only at their own peril.

Consumer = Producer

Ah, this is the kind of marketplace that dreams are made of. In this kind of marketplace, the consumer and the producer belong to the same group. Because of this, it has the additional effect that the marketplace itself becomes the seller. Because there is only one seller, if the marketplace reaches massive scale, by definition, it becomes a monopoly.

A good example is Facebook. Since all the users are both producing and consuming content, Facebook continues to add value through network effects, new features and superior feed updates for user engagement. Facebook tries very hard to add value and keep all the people on its marketplace. Once scale is reached, the activity itself is monetized, primarily through ads, via the entrance of 3rd-party buyers. Thus, Facebook is the seller of user attention and advertisers become the buyers. Facebook does not share this cash with anyone, certainly not the users who are both producing and consuming the value (content). Furthermore, there is no competition and hence no incentive for Facebook to lower prices for buyers. The only competition comes from the outside world via companies like Google, which are similar monopolies. Economists call this “monopolistic competition”: the two products are not interchangeable, and thus, the companies compete more on features than on price. It actually serves both companies really well to add more features and keep increasing prices. Even if all advertising were to move to only Google and Facebook, the price per click for both platforms would continue to go up without the two companies actively colluding with each other.

In general, if the buyers do not have better alternatives anywhere, they are forced to pay the prices commanded by the marketplace. Such marketplaces can quickly become monopolies and use their position to set prices. Beware though, such marketplaces are very tough to build and require massive scale to succeed. Most successful communication apps fall in this category.

On the flip side, if the total volume of transactions (exchange of value) in such marketplaces is low, buyers have many more opportunities outside of this marketplace, and thus are in a much better position to dictate prices. For example, smaller websites with a tenth of the traffic of Facebook typically earn less than one hundredth the cash.

Producer = Buyer

This kind of marketplace is very tough to build — perhaps the toughest. In such cases, the marketplace has to entice the producers first. To do that, the marketplace works very hard to build up the supply, and once the consumers are aggregated, the burden of building up the supply moves to the producers. At that point, the producers also compete with each other and thus must pay the marketplace for listings or ads. Craigslist is a great example. In the case of Craigslist, the producer of value (the party adding the listings) also has to pay for special kinds of listings — homes, rentals, jobs, etc. Thus, the producer is also the buyer (buying not the rental, but placement, and thereby adding money into the marketplace).

Such marketplaces tend to be the hallmark of a highly fragmented producer market. Additionally, the marketplace also needs to control/monopolize distribution and access to the consumers at scale. The fragmentation of producers makes it very difficult to have enough liquidity initially (think listings for Craigslist), but it also adds enough competition between producers that they have to become net buyers (typically of ads) to promote themselves. Once built successfully, such marketplaces are the most defensible.

Yelp is an example of a hybrid of this style of marketplace and the other monopolistic one (consumer=producer). In terms of reviews, the producer and consumer belong to the same group. However, a large part of Yelp’s value comes from having complete and accurate listings, and thus, Yelp provides a lot of tools for businesses to maintain their own page and even showcase flattering reviews and respond to negative ones. Because Yelp’s business model lies firmly in this style of marketplace design, they are forced to provide these toolsets to the producers of the content and cannot get by with simply selling user activity on their platform.

There are additional combinations possible, but the economics are such that they don’t play out. I have listed them below for completeness.

Producer = Seller

Same as consumer=buyer above. In such cases, the consumer also belongs to the same group as the buyer. Not a different scenario.

Consumer = Seller

This is not a valid scenario, as the consumer of the value in the system cannot be selling and getting money in return. In theory, this can work where the consumer=producer and hence is adding all the value in the system. Again, theoretically, all consumers can revolt together and ask the marketplace to share part of their bounty. But, in practice, this kind of organization is not present on the consumer side, and hence, the consumers never become sellers.

Buyer = Seller

This is not a valid scenario. If buyer=seller, then there is no reason for the marketplace to exist, and the transaction takes place outside.
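
To make the taxonomy concrete, here is a toy sketch that maps each role overlap to the marketplace category described above. This is purely illustrative: the `classify` function and its category labels are my own summary of the essay, not a standard model.

```python
# Toy sketch of the taxonomy above: a marketplace is classified by which
# two of the four roles (producer, consumer, buyer, seller) coincide.
# Category labels and pricing-power notes paraphrase this essay.

def classify(overlap):
    """overlap: a pair of role names that belong to the same group."""
    taxonomy = {
        frozenset(["consumer", "buyer"]): (
            "strict two-sided marketplace (eBay, Uber, AirBnb): "
            "perfect competition, little power to raise prices"),
        frozenset(["consumer", "producer"]): (
            "marketplace-as-seller (Facebook): "
            "monopoly at scale, can set prices for 3rd-party buyers"),
        frozenset(["producer", "buyer"]): (
            "producer-pays marketplace (Craigslist): "
            "hardest to build, most defensible"),
        frozenset(["producer", "seller"]): (
            "same as consumer=buyer above"),
    }
    return taxonomy.get(frozenset(overlap), "not a valid marketplace")

print(classify(("consumer", "buyer")))
print(classify(("buyer", "seller")))  # -> "not a valid marketplace"
```

Using a `frozenset` makes the overlap order-independent, which mirrors the point that “consumer = buyer” and “buyer = consumer” describe the same dynamic.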

Well, there it is: a fun way to look at the different kinds of marketplaces, and which business models can work in each dynamic. Some of them are definitely harder to build than others. All of them want to grow larger to gain network effects and aggregate the consumers, but the payout at the end of that road is not the same in all cases. Some marketplaces will reach the end and take all the available cash, while others will have to be content with a small commission, a tax on the system. It is very important to keep this in mind so one can accurately plan for the scale it will take for the marketplace to be wildly successful: an Etsy versus an Instagram.

This essay is a follow-up to 10+ factors to evaluate online marketplaces.

10+ factors to evaluate strict two-sided marketplaces

I have recently been giving a lot of thought to the product considerations in building two-sided marketplaces. A lot has been written about this topic, and some of the posts have been excellent. However, despite the plethora of articles out there, I have noticed that in all the discussions that I have had with other strong product designers, we tend to go around in circles while describing some of the key concepts and how they fit into the exact problem we are trying to solve. In other words, there isn’t a cohesive framework that lists the underlying metrics or key variables that can help a product designer quickly evaluate the merit of an idea for a two-sided marketplace.

This post by the great Bill Gurley details some of the most important considerations while evaluating marketplaces. It is a fantastic post with lots of great nuggets that reveal themselves over and over again, and I urge all who are interested in this topic to read the post and extract the learnings for themselves.

Before we get into the specifics, let’s first understand the nature of the space itself. From Wikipedia:

Two-sided markets, also called two-sided networks, are economic platforms having two distinct user groups that provide each other with network benefits.

The Wikipedia article then provides examples of companies like Facebook, Match.com and eBay as two-sided marketplaces. While technically true, there is a subtle-yet-important distinction between a service like Facebook and a service like eBay when it comes to the two-sidedness of the marketplace. Specifically, it matters greatly whether the consumer of the service is also the buyer of the service. In the case of eBay, the consumer and the buyer are the same person, as the buyer is paying with real money in return for the value provided by the seller (supplier). In Facebook’s case, the consumer consumes information provided by suppliers (publishers) and pays for it with attention, but not money. This attention is then monetized through the Facebook platform and paid for, in real money, by the buyers (advertisers). Thus, Facebook serves three different entities (suppliers, consumers and buyers) and can be considered a three-sided marketplace. This distinction matters greatly in how the incentives are structured for the supply side (adding value) and demand side (consuming value).

Since the distinction is important, but not clearly vocalized, I am going to define the term “strict two-sided marketplace” to mean two-sided marketplaces where there is an explicit monetary transaction between the consumer and the seller, and thus, the consumer is also the buyer. In short, marketplaces like eBay, Uber and AirBnb, and not like Facebook, Google or even Medium.

In this article, Ben Thompson talks specifically about innovation, Apple and Clayton Christensen, but the core point of his thesis is that it matters greatly whether the consumer and the buyer are the same versus not. The main point here is that the distinction is vastly important, and as such, it manifests itself greatly in the specific actions that designers must take to solve the two most important considerations in marketplace design — viz. trust and liquidity.

The authoritative article on building trust in such marketplaces is written by Anand Iyer and is available here. Anand has done such a good job that everyone who is thinking about building marketplaces must read that article at least a few times to grok the fine points. Most people intuitively understand the importance of trust and as consumers, we all make decisions based on trust (or lack thereof) when we decide whether or not to consume products and services. Product designers will thus do well to keep the idea of trust first and foremost in their minds. Ultimately, the key question behind building trust is whether or not the marketplace delivers on the promise of the use case for the consumer in a consistent, timely and predictable manner. To make good on this promise, the marketplace must provide the right set of tools for the supplier and enable them to deliver on said promise. This leads us directly to the other important consideration: liquidity.

I am going to focus the rest of this essay on specific considerations in evaluating a strict two-sided marketplace and understanding the key variables that can expand or shrink the total addressable market (TAM). Some of these can be controlled, while others are dictated by the nature of the goods/services being exchanged.

Evaluating Marketplaces

As mentioned above, Bill Gurley does a fantastic job in detailing some of the key considerations while evaluating marketplaces. The key point is that a lot of these variables are dictated by the nature of the goods/services being exchanged and product designers have very little control over them. However, while thinking through the design and specific decisions, one can target the marketplace to have the most chance of success in building liquidity.

To be comprehensive, I am going to list all the points made by Bill Gurley in his post, but am also going to add a few others that I think that he missed. While possibly less important for the kind of marketplaces that he had in mind, they keep coming up whenever I have discussions or am thinking about newer ideas. Here are the ones listed by Bill, in order, with my own thoughts for each of them:

  1. New experience versus Status Quo
    – Whether or not the experience is a significant improvement over the current use case. For example, the Uber experience is vastly better than that of a taxi.
  2. Economic Advantage versus Status Quo
    – To me, this advantage must be on the demand side, i.e. whether the marketplace offers a cheaper alternative. Again, UberX is cheaper than a taxi for comparable distances.
  3. Opportunity for technology to add value
    – For most discussions, this is a given. If this does not exist, the rest of the discussion is usually moot.
  4. High Fragmentation
    – Here is one where I disagree with Bill’s thesis, and subscribe more to Ben Thompson’s Aggregation Theory. Bill makes the point that high fragmentation is good as it provides easier entry into the market with less resistance from incumbents. I would argue that this was true when the suppliers still had a lot of control over distribution. But the internet has driven the marginal cost of distribution to zero, and as such, if the other factors are in favor, then marketplaces can, and will, take on concentrated incumbent suppliers and modularize them. In other words, high fragmentation is not necessary if the marketplace does a good job of aggregating demand. Even in highly controlled supplier markets, there are examples of hacking this kind of supply liquidity, and while initially challenging, these can deliver great results over the long run (e.g. Netflix).
  5. Friction of supplier sign-up
    – Again, this is a lot more tactical. Whether there is low or high friction, there are different strategies that can solve either problem. Bill does correctly surmise that the critical part is not supplier aggregation, but demand aggregation.
  6. Size of the market opportunity (TAM analysis)
    – Obviously, bigger is better. But no vanity numbers here: examine the TAM with both optimism and paranoia.
  7. Expand the market
    – This is one of the points that I am yet to grasp in its entirety. While doing TAM analysis, one has to look at the factors today and what part of the market is addressable by the addition of the new service. The analysis to figure out new use cases and whether the marketplace expands the market is much harder to do, but also captures a lot of the value. Most entrepreneurs come up with far-fetched scenarios that expand their market. A realistic yet optimistic analysis is very difficult, but separates the good entrepreneurs, product designers and venture capitalists from everyone else.
  8. Frequency of transaction
    – Obviously, higher frequency is better. The flip side is to consider how low can the frequency get before the marketplace starts losing value to the consumer. In other words, at what frequency will the consumer go back to the status quo? For example, if the AirBnb experience was only marginally better and cheaper than comparable hotels, then demand stickiness would be a much tougher problem to solve because the frequency of transaction is very low.
  9. Payment flow
    – Excellent point that if the marketplace is part of the payment flow, then it can dictate some terms of commissions (and also, as we will see, liquidity). Some marketplaces simply match buyers and sellers, while the transaction takes place outside the marketplace (e.g. autos). In this respect, one must follow design principles for aggregation theory. In general, strict two-sided marketplaces have a lot more opportunity to be part of the payment flow, unlike the aforementioned three-sided ones. For example, Yelp is not part of the payment flow, and as such even though it is widely used to make consumer choices with high frequency of transaction + high average cost, it had to define itself as a slightly different marketplace where the supplier is also the advertiser/buyer.
  10. Network effects
    – Most successful marketplaces have some kind of network effects, but they are not all the same. All strict two-sided marketplaces have network effects on the supply-side where greater demand increases incentives for supplier (higher utilization and thus lower cost for services-based supply, or economies of scale for goods-based supply).

The above list is excellent. In addition, I keep coming back to a few more:

  1. Cumulative nature of supply
    – It matters greatly whether the supply is cumulative in nature or transient. While there is always some ingress and egress as new suppliers come in and old ones go out, some marketplaces have a more-or-less cumulative supply. For example, a house listed on AirBnb is much more likely to remain there; to some extent, that is less true for an Uber driver. For marketplaces selling goods, like Etsy, each seller listed adds greatly to the cumulative supply. This has benefits for all parties involved: buyer, seller and the marketplace.
  2. Size of transaction
    – Yes, this has to go in concert with the frequency of transaction. All things being equal, bigger is better. The key question though is how these two variables relate to each other. For example, AirBnb has low frequency, but a large size of transaction (rumored to be ~$400 on average). Thus, even if the AirBnb transaction occurs once a year, the fee collected (~$58) is very high ARPU (average revenue per user). The rest of the $400 is passed on to the seller. Uber, on the other hand, has a lower average transaction size, but the frequency is much higher. If the product of the two numbers is the same, higher frequency trumps larger size, as it also begets brand loyalty. Consumers are less likely to switch if they are already using one marketplace more often. On the other hand, if the product of size times the frequency is larger because of a larger average transaction size, then the marketplace must be designed with that in mind.
  3. Temporal nature of supply
    – Is the supply transient in nature, or mostly there? This is slightly different from the cumulative nature of supply. For example, a restaurant added to a marketplace is cumulative, but events at the same restaurant are transient in nature. Once the event has already transpired, it does not add any value for future liquidity. All transactions are ephemeral in nature, but if the underlying supply is transient itself, then it shrinks the marketplace liquidity. The marketplace designers will then have to keep adding new supply constantly, and this presents a significant challenge while balancing liquidity and demand.
  4. Local or global nature of the transaction
    – The Internet has done a wonderful job of bringing information about global supply to local areas. AirBnb is a great example of this: even though its supply is local in nature, the demand it spurs is global. Uber, on the other hand, is very much a local player, and as such can be looked at as a collection of many different local marketplaces. Yes, there is crossover where some consumers (riders) will travel and use Uber in other markets, but the larger part of the design has to account for both supply and demand locally. This is why the marketplace dynamics for Uber are different in each city. As the product designer thinks about this issue, it is important to keep in mind that a strictly local marketplace shrinks the potential supply for demand. Over time, it can be built and exploited, but the job is much harder than building the whole network at once. Thus, for purely local marketplaces, a city-by-city (also called market-by-market) approach usually works best, even if it all happens at the same time.
  5. Single player mode
    – This is a really important concept from the demand side. Imagine Uber where there is no other rider in the system. The service will still be immensely useful to a single rider. In fact, a single rider does not care whether other riders are present in the system as long as the supply is liquid enough. So, while network effects will make the system more powerful, this concept allows the marketplace designer to employ growth techniques for demand that are somewhat independent of the supply-side. In the absence of the single player mode, the designer has to balance both supply and demand together, a much more challenging prospect.
  6. Seller/Buyer ratio
    – This is a rarely talked about concept, but for increased utilization of supply, the lower this ratio, the better for the marketplace. Lower ratio means that there are many more buyers, and thus, each seller has to service a number of buyers. This increases the utilization of supply, and incentivizes sellers to make marketplace supply their primary occupation. If sellers are professionally tied to the marketplace for their income, they are much less likely to leave the marketplace. Said another way and with the lens of aggregation theory, as the marketplace controls demand, the supply side loses power and gets modularized and interchangeable.
  7. Cross-over between buyer and seller
    – In some marketplaces, increased activity by the buyer also moves the buyer towards becoming a seller. For example, people who buy things on eBay also tend to sell things on eBay. Sure, eBay has professional sellers who form the backbone of the supply, but the initial growth in liquidity comes from these very important buyers who tend to become sellers. However, even in these circumstances, the considerations for the buyers and the sellers are different, and the marketplace design should keep that in mind. Some marketplaces are just not suited for this, e.g. Uber drivers are actually less likely to be riders.
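
The size-times-frequency tradeoff in point 2 can be made concrete with a quick back-of-the-envelope calculation. The take rates and ride numbers below are illustrative assumptions of mine, not actual company figures; only the ~$400 AirBnb average and ~$58 fee come from the text above.

```python
# Back-of-the-envelope ARPU: annual revenue per consumer is roughly
# transactions/year * average transaction size * marketplace take rate.

def annual_arpu(frequency_per_year, avg_transaction, take_rate):
    return frequency_per_year * avg_transaction * take_rate

# Low frequency, large size (AirBnb-like): 1 booking/year at ~$400,
# with a hypothetical ~14.5% combined fee -> ~$58 per user per year.
airbnb_like = annual_arpu(1, 400, 0.145)

# High frequency, small size (Uber-like): a hypothetical 50 rides/year
# at $12 each with a hypothetical 25% take rate.
uber_like = annual_arpu(50, 12, 0.25)

print(round(airbnb_like), round(uber_like))  # 58 150
```

Even when the two products of size times frequency land in a similar range, the essay’s point stands: the higher-frequency marketplace earns its ARPU through habitual use, which also begets brand loyalty.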

I realize that this list is far from complete, but I find myself coming back to these considerations in multiple discussions.

Product design is highly subjective by nature, but our job as product managers/designers/thinkers is to balance the different forces and come up with valid frameworks that enable predictable action.

In future essays, I want to tackle the question of how to systematically add and improve liquidity if the TAM is big enough, and once there is enough liquidity in the marketplace, how to employ specific marketing techniques for scale. And while the considerations above are meaningful in my own analysis, there is a lot of potential to formulate a theory and understand/quantify the relative value of the different variables.

If you have any questions, or comments, you can find me at anuragmjain — at — gmail.com

Basics you wanted to know about big data, machine learning and artificial intelligence, but were afraid to ask.

Over the past few months, the field of artificial intelligence has been exploding. A lot of people I meet here in the Bay Area talk about it constantly, and they try to come up with different use cases for artificial intelligence. It is increasingly clear that artificial intelligence will be a major toolset of the future. I believe it will exceed the status of a toolset and find an evolutionary path of its own.

But the more conversations I have around this, the more definitions I hear around the different buzzwords. What is artificial intelligence? Is it the same as machine learning? Some people throw around words like Natural Language Processing (NLP). What is that? Most predictive analytics companies claim to be using some form of artificial intelligence. Are they really all using cutting-edge technologies? If not, what are they using? And how does it help or hurt them when competing with other companies who, in fact, are using some of the cutting edge tools?

Over the series of the next few blog posts, we plan to illuminate the key differences between what people are doing, how to think about machine learning and AI in your product, and how to prepare your company to be competitive in the future that is inevitable.

But first, some definitions. Keep in mind that these blog posts are written from the point of view of practitioners and not researchers (although we work hand in glove with researchers). Thus, we won’t get super technical about any of these items. There are people far smarter and far more articulate who have done an excellent job of demystifying the science behind all of these concepts. We will do a blog post compiling some of our favorite resources very soon. For now, we will focus on the practical aspects of the field and how company executives should be thinking about the best ways to use data to put their companies on the far end of the competitive spectrum.

Ok, enough chatter. On to some loosey-goosey definitions, along with a recap of some of the basics:

What is big data?
So, you have heard the term big data and understand that it refers to a large amount of data that could be structured or unstructured. It matters because there are meanings, patterns and predictive behavior hidden in that large swath of data. However, the traditional computational and data processing techniques that we all grew up studying just don't solve the problem of extracting meaning from such large amounts of data. First, the data needs to be stored across hundreds (or thousands) of servers. Then, it has to be presented in a format where it can be analyzed. Traditional techniques of analyzing massive amounts of data in one go just don't work; analysts simply can't hold and analyze it the way they did in the past. Along with the proliferation of the cloud, newer big data techniques can help wrangle this much data far more easily. Which brings us to the next question:

How do we make sense of all this data? 
To make sense of the data, we first have to present it in a format that an algorithm can consume. The next part is tweaking those algorithms to get the desired understanding. Machine learning is one of the newer techniques that can help uncover the patterns in the data without an analyst starting from a specific viewpoint. Actually, machine learning techniques have been around for decades (yes, decades). But in 2012, there was a major breakthrough that achieved a phenomenal result in image recognition. The technique the researchers used came to be known as deep learning. Researchers, and then practitioners, all over the world rejoiced, and felt that this was the new silver bullet to solve the world's data analysis problems. Coupled with the fact that everyone was generating vast amounts of data, researchers grew more confident that this technique plus big data could find hidden meanings that were more difficult to find in decades past. It looks like their excitement was well placed. Great progress has been made in this area, and the progress continues to surprise even the most ardent fans of these techniques.

So, machine learning lets computers find meanings in data?
In short, yes. But that's a very broad definition. More specifically, machine learning refers to the idea of letting these new algorithms and techniques find meaning in data without starting from an analyst's viewpoint. Let me give you an example. In traditional data analysis, a typical analyst will come up with theories on how the data could be related and then validate those theories. Most of the time, the hypothesis proves incorrect, but not without yielding more information with which to form a new hypothesis. Machine learning techniques turn this approach on its head. By letting machines discover patterns in the data, they can be used to find highly complex relationships within the data which cannot be adequately modeled by even the best mathematicians. Exactly how they do this is the subject of another blog post, where we will cover basic concepts like supervised learning and unsupervised learning, and when each one makes sense. For now, let's keep in mind that machine learning techniques are more powerful and try to uncover patterns which the machine learning theorist or practitioner need not be aware of before the process begins.

Ok, I get it. Can machine learning be applied to ‘small data’?
Yes. It is not necessary that a large amount of data be present for the techniques to be successful. A simple test is whether the data contains enough information and structure to make some sense of. For example, a list of 100 houses in a zipcode with prices and square footage will give one a very good idea of how to price a new house given its square footage. However, if the data only contained house prices and the number of windows in each house, then that's not a good indicator. The best way to think about it is that if a human can be trained to make some sense of the data without relying on other knowledge, then a machine can probably do so as well.
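The house-price example above can be made concrete in a few lines. This is a sketch with made-up numbers, using a plain least-squares fit for a single feature rather than any particular machine learning library; the point is only that a small, informative dataset is enough to learn a useful relationship:

```python
# Learn price = a * sqft + b from a handful of (sqft, price) examples,
# then predict the price of a new house. Data is invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var           # learned price per square foot
    b = mean_y - a * mean_x
    return a, b

# A few houses from a hypothetical zipcode: square footage -> sale price.
sqft   = [1000, 1500, 2000, 2500, 3000]
prices = [300_000, 450_000, 600_000, 750_000, 900_000]

a, b = fit_line(sqft, prices)
predicted = a * 1800 + b
print(round(predicted))  # → 540000 for an 1800 sqft house
```

Replace square footage with window counts and the fit would be noise: the technique is the same, but the data no longer contains the information needed to generalize.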

So, what is this artificial intelligence?
Artificial intelligence is the most difficult one to define. I tried to read the definition on Wikipedia, and it gave me a headache. Everyone defines it differently, but in general it refers to the idea of computers and algorithms doing things that were earlier considered the dominion of humans. For example, understanding complex voice commands, sentences and phrases was considered near impossible about a decade ago, and yet computers are able to do just that. Similarly, reading, characterizing and understanding handwritten signs, or the landscape while driving a car, are all things that seem fantastic for a machine to be able to do. Ultimately, under the covers, it is a matter of getting a lot of information from various sources (multiple cameras and all kinds of sensors) and correlating it in a manner similar to how we make sense of data. Hence, the term 'artificial intelligence' — there is a lot more complex "solving" and "learning" happening. Also, it sounds cool!

I hope the above gives you some sense of the world of machine learning and artificial intelligence. Over the next few posts, we will go a little deeper into each topic, while keeping in mind that our target audience are industry executives who should be prepared for the changes which are already occurring in their industries.

If you have any questions, feel free to email me or find me on LinkedIn. I'd love to hear from you if I can help you or your team with machine learning.

A march towards quality

So, I have been having discussions with a very good friend of mine over the past several months. While I have bounced around ideas on how to write about specific topics, I have never been able to actually start. Today, he challenged me to write something anyway, no matter how shitty it is. After racking my brain on how to come up with the perfect subject matter, I was reminded of a story from the book Art & Fear. Ever since I read it, I have been boring all my friends and everyone I meet with it. It is a famous parable in the book, and goes something like this:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality.

His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”.

Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

While the implications of the story are obvious, and most of you are nodding your head vigorously while applauding the simplicity of the message, it is worthwhile to delve a little deeper. The story is particularly powerful because our initial reaction is one of amazement and awe. However, the awe subsides very quickly as comprehension takes hold and it is immediately obvious from our own experiences that this must be true.

Then, why is our initial reaction one of awe? I believe it is primarily because we are less inclined to believe that a random group of students can iteratively learn without having any outside impetus to do so (since they were tasked only with coming up with quantity, not quality). However, when the discussion becomes personal, we can feel the truth of the story because clearly, that is exactly what we would do in that situation. I certainly would like to think so.

In any case, the clear message is that it’s far superior to start from a horrible iteration and work your way up to quality instead of sitting, reading and theorizing about the right way to get there. I must admit, I have been plenty guilty of the latter. Actually, in software development, quantity does inform quality. Over time, the most prolific developers end up writing the best code.

So, I am going to start small and get back into the habit of writing random thoughts on this blog. If, and when, the material becomes post-worthy for other places, I might actually publish. But for now, it is all quantity with absolute disregard towards quality.


Remembering Steve Jobs

Much has been written about Steve Jobs in the last couple of days. People throughout the world have showered love and affection on the public figure of Steve Jobs. Apple’s products showed that their creator must be a force to be reckoned with. For people here in the valley, especially those enchanted with products, design and innovation, Steve was a legend way before his death.

I never met the man, and only knew of him through his products, his public appearances and his quotes. Most people, like me, knew Steve Jobs only this way. So why is it that we all feel so overwhelmingly sad at his demise? Is it because he was a great man of our times who brought the future to the present? Is it because he showed us what beautiful products look like? Is it because he reminded us that true beauty is, in fact, universal? Is it because we all know that products in the future will just not be the same without the guiding force of superior design?

It is all of the above and much, much more. Most people say that Steve’s brilliance lay in the fact that he really knew what customers wanted. I think that’s totally incorrect. I don’t think he cared about what other people wanted. But I think he did care immensely himself. He was just trying to make the best thing he could given the technology of our times. Steve wanted more, because he knew that more was possible. The main difference between him and other innovators was that he went much further in trying to solve a problem. Most people would stop at a very, very good product. But not Steve. He wasn’t trying to be perfect, but he was making the best thing that appealed to him. And because he was relentless in his quest to find the thing that truly appealed to him, it ended up appealing to the masses as well.

Yes, Steve inspired us all to make beautiful things. And yes he did inspire us to design things well. But the reason we really miss him is because he showed us that we should do something wonderful that we want to do, and not because someone else demands it. Steve showed us a way to live our life which breaks all traditional and cultural norms. He showed us that we should stop worrying about what other people (including customers) think, and instead worry about whether we are being true to ourselves in building what we can. He showed us that it’s important to have taste, good or bad, and bring it to what we do. He showed us that good enough just isn’t good enough.

Steve taught us that it’s important to do the best you can with whatever is available and not give up too soon. Not because we should satisfy other people’s cravings, but for our own sake. Because, after all, this is our life we are talking about. No one else teaches us that lesson. Only people, like Steve, who have lived their lives by that code can inspire us to attempt the same. And they teach us that the purpose of our life is not to earn great riches or to have huge impact or to build things that others want, but simply to do something wonderful.

And for a reminder of that lesson, I thank you Steve, and may you rest in peace.

What it means to create a vertical social network

This is a part of the ongoing series of blog posts detailing how one should think about social features, and adding them to your product. Earlier, we discussed the various types of social networks.

In this post, we will delve deeper into what it means to create a vertical social network. To recap, vertical social networks have a place in the world of the internet. Although juggernauts like Facebook have much greater network effects because everyone and their uncle are on them already, they still lack features that are domain-specific. Specifically, broader social networks are forced to define the relationships between people first, and have only looser relationships with content. This has the advantage that any content can be shared across those networks, but the challenge lies in relating different pieces of content to each other, as well as in deriving deeper and more meaningful relationships from the posted content. For example, I used to share on Facebook all the places I was visiting, but Facebook was not able to put that information together in a meaningful way, mostly because it was all written in free text, and that is just not an easy technical problem.

Enter Foursquare, a vertical social network based on the idea of geographical check-ins. Lo and behold, we suddenly have structure to that information in the form of the name of the place, its category, lat, lng, etc. Suddenly, I can put my places on a map, see where my friends have been (again on a map), and use it on the go. Thus, vertical networks, although restricted in that they attack only a subset of content, can enable more meaningful scenarios for using the information generated.
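The jump from free text to structure can be sketched in a few lines. The field names and example values here are illustrative assumptions, not Foursquare's actual data model; the point is that once a check-in is typed data rather than a sentence, vertical features become simple queries:

```python
from dataclasses import dataclass

# Free text: "Had lunch at Blue Bottle in SF!" is hard to put on a map.
# Structure: the same fact as typed fields the product can query.

@dataclass
class CheckIn:
    user: str
    place_name: str
    category: str
    lat: float
    lng: float

checkins = [
    CheckIn("alice", "Blue Bottle", "cafe", 37.7763, -122.4233),
    CheckIn("alice", "Ferry Building", "landmark", 37.7955, -122.3937),
    CheckIn("bob", "Blue Bottle", "cafe", 37.7763, -122.4233),
]

# A vertical feature falls out of the structure, e.g.
# "which friends have checked in at the same cafes as me?"
cafes_by_user: dict[str, set[str]] = {}
for c in checkins:
    if c.category == "cafe":
        cafes_by_user.setdefault(c.place_name, set()).add(c.user)

print(sorted(cafes_by_user["Blue Bottle"]))  # → ['alice', 'bob']
```

A broad network holding only the free-text sentence would need to solve a hard language-understanding problem to answer the same question.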

However, one key point to remember is that simply by adding structure, we ask the user to bear an extra burden in providing us with the data that ‘we’ care about. So, how do we make it so that the user likes generating that information for the network’s benefit?

A good principle to follow while designing this is:

     Everybody talks. Nobody listens.

I will examine this idea in depth in a later post, but for now the point is that the key to building a successful network is not to look from the point of view of the person who is going to consume the information, but rather from the point of view of the person generating it.

Thus, the challenge for the product designer is to balance the structure in the information with the fun of adding that information. And this is where most product designers go wrong, and where the successful ones get it right. Most products treat the information that must be added as a chore, one that will reap dividends once network effects take hold. However, that is exactly the wrong way to look at it. Most people still do not care about consumption until after critical mass is built, and therefore, getting to critical mass is the number one step.

The right way is to see how much fun we can make adding the information. If people are already doing something, then you must find the one thing that will make users add that information to your product instead. And then network effects take place. And then other non-generators will consume. Not before. For example, people check-in on Foursquare or on Facebook for fun, and then other scenarios become available. Not the other way round.

So, if you can take only one thing from this post, take this:

Design your interaction in the network primarily for the person who will be generating the information, without looking at other people’s information.

As usual, let me know your thoughts in the comments below.

Types of social networks

This is a part of the ongoing series of blog posts detailing how one should think about social features, and adding them to your product. Earlier, we discussed the social follow/friendship models, and where each one is applicable.

The models are not only interesting from a theoretical perspective, but also have deeper implications for the kind of network affiliations and bindings they enable. Depending on which model the product designer decides to follow, the result is either a broader network or a tighter one.

For the purposes of this post, we will examine the difference between a broad social network and a vertical one. It is common to hear lots of companies and products say that they are creating a vertical social network. What do they really mean? Let’s first look at what a broad social network is, and then see where the verticals lie.

From Wikipedia,

A social network is a social structure made up of individuals (or organizations) called “nodes”, which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, common interest, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige.

Essentially, a social network consists of people connected to each other. Right? Well, yes and no. The definition is true, but it is not precise. People connected to other people because of social connections is one thing; people connected to other people because of their shared relationship to some content that both value is quite another. In that sense, Facebook started off as a true social network, while Twitter started off as an interest network.

These days, a typical social network starts by having users define their relationship with content first, and then builds ‘better’ relationships with other people based upon the collection of content that might be interesting to both parties. When the field of content, and the relationships one can have with it, is broad, we get a broad social network. Restricting this field creates a vertical social network. Thus, vertical networks are, by definition, a subset of the possibilities of connections not only between people, but also between people and content.

So, why do people build vertical networks, when the field is limited? It’s kind of like the niche idea. By focusing on a specific area, products hope to achieve two primary goals:
1. more structured information about the field of content in question, and
2. better and more meaningful relationships between people based upon the exploitation of this structure in the content.

Since broad networks have mechanisms in place to interact with all kinds of content, they cannot add structured information about any specific area of content. For example, Facebook cannot let us interact with restaurants in a restaurant-specific way, but must leave things open enough to accept a restaurant as just a place (categories do come into play, but only a little). This hole left by the dominant networks beckons to entrepreneurs with specific needs, and they try to create vertical networks with structured information.

Note that we have only talked about the types of social networks in this post. Some people think that Twitter has defined the interest network, but I think that’s not quite correct. It’s true that we follow other people based upon our interest in the subjects they write about, but there is a lot of evidence to suggest that Twitter is also used as a propaganda mechanism by marketers, and as a means of keeping in touch by others. In either case, it doesn’t fully satisfy the criteria of an interest network.

If we were to define the relationship of a person with content to discover their interests, then Google probably has the richest information available, in the form of search. Of course, privacy concerns and laws prohibit them from leveraging that advantage fully for networks like Google+ (for example, G+ could suggest people for me to follow based upon my recent searches; how useful that would be is anybody’s guess).

In the next post, we will look at the specifics of what it means to create a vertical social network and the challenges faced in doing so.


Social Models

There has been a lot written about the kinds of ‘social’ graphs prevalent on the internet. I am not adding any great new information, and readers are advised to check out the following posts:



Here, however, I am including this discussion mostly for completeness. But I also think the discussion is valuable for pointing out the key differences between the kinds of social models, as well as for understanding the history behind their evolution.

The two main types of social models under consideration are:
1. Friendship (undirected graph)
2. Follower (directed graph)
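The two models map directly onto undirected versus directed graph edges. Here is a minimal sketch; the function names and example users are illustrative:

```python
# Friendship: a symmetric (undirected) edge -- both sides must agree.
# Follower: an asymmetric (directed) edge -- one side acts alone.

friendships: set[frozenset] = set()
follows: set[tuple] = set()

def befriend(a: str, b: str) -> None:
    """Handshake model: one unordered edge covers both directions."""
    friendships.add(frozenset({a, b}))

def follow(follower: str, followee: str) -> None:
    """Follower model: no consent needed from the followee."""
    follows.add((follower, followee))

befriend("alice", "bob")
follow("alice", "obama")  # alice follows Obama...

assert frozenset({"bob", "alice"}) in friendships   # order doesn't matter
assert ("alice", "obama") in follows
assert ("obama", "alice") not in follows            # ...but not vice versa
```

The data structures make the constraint visible: in the friendship model there is no way to store a one-sided relationship, which is exactly why that model cannot express "I follow Obama, but he doesn't follow me".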

The original social model of friendship started primarily with sites like Friendster. However, it wasn’t until the advent of MySpace that it really took hold. Facebook, of course, took it to a whole new level. The model primarily involves a handshake element: if I am to become your friend, you must also become mine. Although this was geared more towards privacy, it meant that the field was automatically somewhat limited, constrained by the intersection of the interests of the two people involved. If only one of them was interested in knowing about the other, the model wouldn’t work. For example, I am interested in being a friend of Obama, but I don’t think he cares (at least not at present). Thus, this kind of model results in a true ‘social’ graph. If I am not socially connected to another person, this graph will not take hold.

The follower model got rid of this barrier by allowing anyone to follow anyone else. Now, I can follow Obama, and he doesn’t have to follow me. This way, his ‘inbox’ is not cluttered by my updates. It is arguable that Twitter’s main innovation has been the creation of the follower model and the subsequent choice, with regards to privacy, of keeping those updates public by default.

The upshot of the follower model is that it ends up creating not a truly social graph, but more of an interest graph. Now I can follow people based on my interests, and I don’t have to know them socially. The beauty of this method is that I am now in control of my interests, and am free to follow anyone who is speaking about things that might satisfy my interests.

The follower model has taken hold in all verticals where new products are emerging to let people share their opinions about any topic, and to let other people follow those opinions. In each case, the person who is following can decide to un-follow someone, or follow new people. This provides a great sense of control to the user, and lightens the burden on application developers to suggest better, more interesting content. Because users can now simply follow other users to discover new content, application developers can concentrate on building tools for discovering people to follow, not necessarily content.

However, the friendship model is still relevant in cases where we care less about interest and more about social engagement driven by already established norms. For example, LinkedIn still follows this model, and it makes sense for their product. I must know the other person in order to be connected on LinkedIn, so that I can then use that connection to find better opportunities later, find good candidates, and so on. Also, once the friendship model takes hold, it is much more difficult to break, since no one wants to sever social connections that are backed by real-world relationships. So, this network becomes much more powerful, and can only grow.

So, both models have their place, but it is up to each product to decide how best to use them. I have seen hybrid models where you can follow users, but can also form stronger bindings based on friendships. Facebook seems to be moving to this system as well with the recent addition of its “Subscribe” functionality, where a user can subscribe to another user’s public updates (Twitter or Google+, anyone?).

What do you guys think?

Roadmap for building ‘social features’ into the product

At the core of any product, you are going to have people interact with some piece of content. However, before we jump into how to design a viral loop for free marketing and growth, let’s examine the right way to think about how to make your product more social, and how to build the social elements into the basic value proposition.

The following is an outline of the topics I am going to cover about how to add social features into a product. Some of them might be very basic, but it’s important to cover them so that it’s clear what we are all talking about.

  1. Social Models
    • friendship (symmetric)
    • follow (asymmetric)
    • advantages/disadvantages of each
  2. Social network types
    • broad social network
    • vertical/interest social networks
  3. What it means to create a ‘vertical social network’
  4. Traditional way to think about it
    • nodes and edges
    • e.g. Facebook (edges are friendship)
    • e.g. Twitter/instagram (edges are interest)
  5. New way to think about it
    • Layering
    • edges are “MY RELATIONSHIP” to that content (picture, food, link, story, blog post, etc.)
  6. Revisit: What it means to create a ‘vertical social network’
  7. Social lens for content
    • Different Views possible based on relationships and friends’ friends
  8. Recommendation systems (discovery of new content)
    • Algorithmically jumping from one piece of content to another
    • Letting users control what they see (through the social lens)
    • Examples

Let me know your thoughts, or if I have missed anything.