At 213 slides, Mary Meeker’s anticipated annual “Internet Trends” report was a gold mine of data on everything from debt-to-GDP ratios by country to U.S. Internet advertising growth.
The Kleiner Perkins Caufield & Byers general partner unveiled her report on Wednesday at tech blog Recode’s Code Conference in Dana Point, Calif. While you can read the entire deck here, below are five trends from Meeker’s report that we found especially notable.
1. Slowing global Internet growth: Global Internet user growth was flat from 2014 to 2015 at 9% year-over-year, down from more than 15% in 2009. Why? Meeker said it’s harder to acquire new Internet users now that such a high share of people in developed countries are already online, while users in less developed countries are harder to add because of the high cost of smartphones relative to incomes. The cost of a smartphone, for example, is 15% of per capita income in Vietnam and 10% in Nigeria and India, a McKinsey study found. The notable growth anomaly is India, where Internet user growth accelerated by 7 percentage points. This boost helped India surpass the U.S. to become the second-largest user market, after China. The growth of global smartphone users is also slowing.
2. “Easy” economic growth is over: Global economic growth has been below the 20-year average of 3.8% (from 1996 through 2015) in six of the last eight years. Meeker says the cause is the decline of five of the biggest growth drivers of the past two decades: slowing connectivity growth (Internet users have reached 3 billion, up from 35 million in 1995), falling GDP growth in emerging countries, rising government debt, plummeting interest rates and a global population that is aging and growing more slowly. The opportunity? Meeker said the slowdown creates opportunities for companies that create efficiency, add jobs, lower prices and innovate. By region, China and emerging Asia made up 63% of total real GDP growth, while America, Europe and Japan together made up 29%.
3. The era of the image: Images are growing in importance and use, while text, and specifically textual search, is fading. Meeker said that within five years, at least 50% of searches will be made through images or speech. The rise of images has a lot to do with users’ increasing use of smartphones for storytelling, sharing, messaging and creative expression. Advertising will naturally continue to be built into visual experiences through methods such as user-applied filters. Meeker says Generation Z (people age 1 to 20) will be known for their use of images. Video is becoming increasingly social (think live sports events). And among social network users now, Facebook, Snapchat and Instagram are the leading platforms for engaging millennials.
4. Messaging as the new mobile home screen: Over time, messaging apps could overtake the home screen on mobile devices. This is believable given that 80% of users’ mobile time is spent in three apps, and the average global mobile user accesses just 12 apps daily. The most commonly used apps in 2016 globally are Facebook, WhatsApp and Chrome. Messaging will shift from being simple social interactions to increasingly expressive over time and will include more and more business-related interactions. Meeker lists WhatsApp, Facebook Messenger and WeChat as the current messaging leaders.
5. Rise of voice interfaces: Meeker said voice should become the most efficient form of computing input, largely because it is hands- and vision-free. Voice lends itself to an “always on” way of life. Humans can speak 150 words per minute, for instance, but can type only 40 words per minute. The conversational aspect of the medium lends itself to personalized experiences, with computers understanding context from previous questions the user has asked and from the user’s location. While many voice recognition tools are frustrating to use now, Meeker said that when speech recognition reaches 99% accuracy, people will go from barely using the tool to using it all the time. Speech recognition accuracy rose to about 90% in 2016 from about 70% in 2010. And the use of voice has risen noticeably: Google voice search queries, for example, are up 35-fold since 2008. Sales of voice-based devices such as the Amazon Echo could be just about to take off, compared with more text-dominated devices such as the iPhone, whose sales peaked in 2015.
The Portals Project: This gold box is ‘better than Facebook’
Los Angeles (CNN) Thousands of commuters buzz by it; dozens more see it from the Starbucks line less than 100 feet away. But only a few enter this gold box in the middle of downtown Los Angeles’ Grand Park.
As the World Turns to Digital
#1 Apple ($582 billion)
#2 Alphabet ($555.7 billion)
#3 Microsoft ($452.1 billion)
#4 Amazon ($364.4 billion)
#5 Facebook ($358.6 billion)
Facebook just edged out Exxon, which was #6 at $358.3 billion.
For the first time in history, the top five companies are all digital companies. They are all U.S. companies, but with global reach, impact and influence. They are all relatively young (less than 40 years old), and they are likely to reshape traditional industries as they digitize their products, processes and services. These companies have already influenced several sectors, such as healthcare, retailing, media & entertainment, telecommunications, automotive, advertising, and marketing & communications.
I believe they are just getting started as they expand their scale and scope at an unprecedented speed. With their increasing R&D investments and patenting proclivity, they could exert significant influence as we rely more on artificial intelligence in areas such as conversational bots (e.g., Amazon’s Alexa and Apple’s Siri), drones (Amazon, Facebook, Google), healthcare (Alphabet and Apple), virtual reality (Microsoft HoloLens and Facebook’s Oculus), cloud (Amazon Web Services, Microsoft Azure, Google and possibly the other two) and the Internet of Things (where all five would jockey in different ways). There will be more opportunities to enhance efficiency as well as to usher in innovations that solve fundamental problems in industry and society.
We are just getting started and the road to 2020 and beyond will see greater dominance and influence of these digital giants and others such as GE (ranked #8), AT&T (ranked #9), Verizon (ranked #15), Alibaba (ranked #16), Intel (ranked #26) and IBM (ranked #32).
It’s a far different state of affairs than the irrational euphoria of the dotcom boom and bust of 2000. We are truly shifting to a post-industrial, digital era.
Hybrid Apps and the Future of Mobile Computing
Learn how hybrid app development is pulling ahead of native in the constantly changing mobile ecosystem.
The jury is out on which type of mobile app is the future of mobile computing. The stats point to native being the dominant app type in use. Most of the top 100 apps in the app stores are native, and comScore reports that 50% of all time spent digitally is spent on mobile apps (though it doesn’t give a split between native and hybrid), and just 7% is spent on mobile web apps.
While native apps are great for engagement, mobile websites still draw a majority of traffic. Applause accurately says, “The Web Gets Eyeballs, Apps Keep Them.”
Clearly, relying on native mobile apps is not enough. For this reason, Bloomberg and many large mobile app publishers use both web and native mobile apps, not wanting to miss out on either. However, this is far from efficient. What we need is to build and ship mobile apps much like web apps: deploy once, and it works across all platforms. Fortunately, there are exciting developments on this front.
On the other side, the two major mobile platforms, iOS and Android, are taking steps to make mobile web apps function like native apps, allowing web apps to place their icons on the homescreen or app drawer, send notifications, and even leverage device functionality. Google’s Progressive Web Apps are the most recent development in this regard, and there are already numerous examples of apps that have gone progressive.
As these two trends converge in the future, they will lead to a shift from native mobile apps to hybrid apps. There are several reasons why hybrid apps are set to trump native apps in the near future:
App Store Limitations
Today, releasing a native mobile app involves packaging the code, submitting it to the app store, and waiting for it to be approved. The entire process can take anywhere from two to seven days. This is an eternity in the mobile world. Mobile app developers (especially those that already practice DevOps for their web apps) want to be able to update their mobile apps like their web apps, multiple times a day if necessary. This is not possible with the limitations of app stores, and hybrid apps are the way out.
The Rising Talent Gap
Code.org estimates there will be 1.4 million computing jobs available by 2020, and only 400,000 computer science students. This is also true for mobile development. Truly great iOS and Android developers are a rare find. It’s a better strategy to make the best use of the existing talent you have than to leave your mobile development at the mercy of scarcely available new talent.
Faster Time to Market
The popularity of mobile apps rises and falls faster than their web counterparts. Ratings, reviews, installs, daily active users and churn rate all add up to decide the fate of a mobile app. In this fast-paced world, hybrid moves you faster from idea to app than native.
DevOps for Mobile
Finally, hybrid apps let you extend DevOps to your mobile apps, too. They let you go from mammoth quarterly app updates to a bi-weekly cycle, and eventually let you update as frequently as your web app, which is close to impossible with native apps today. To update at this frequency, you’ll need to automate the two key parts of your continuous integration (CI) process: builds and tests. This is where tools like Git, Jenkins, and Appium have a key role to play. When well-integrated, they can let you focus exclusively on developing and testing your app, rather than worrying about mobile platforms’ norms. This gives you the confidence to release multiple times a day, and take ownership of your mobile development process.
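The fail-fast build-then-test flow described above can be sketched in a few lines. This is an illustrative sketch only, not any real CI tool’s API: each stage stands in for a step a server like Jenkins would run (compiling the app, running an Appium suite), and the stage names are hypothetical.

```python
# Minimal sketch of a fail-fast CI pipeline runner: stages run in order,
# and the pipeline stops at the first failing stage, as a CI server would.
def run_pipeline(stages):
    """Run (name, fn) stages in order; return (ok, log of stage results)."""
    log = []
    for name, fn in stages:
        try:
            fn()
            log.append((name, "passed"))
        except Exception as exc:
            log.append((name, f"failed: {exc}"))
            return False, log  # fail fast: later stages never run
    return True, log

if __name__ == "__main__":
    ok, log = run_pipeline([
        ("build", lambda: None),       # stand-in for packaging the app
        ("ui-tests", lambda: None),    # stand-in for an Appium test run
    ])
    print(ok, log)
```

A real setup would replace the lambdas with shell steps (e.g. invoking the build and the test suite), but the control flow, run everything on every commit and stop on the first failure, is what makes multiple releases a day feasible.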
This post will sound too one-sided if I ignore the fact that as of today, native apps deliver a much better and faster UI than hybrid apps. This is the single biggest reason they’re so popular with developers and users alike. However, hybrid apps are fast approaching native-like functionality. All of the reasons above add up to show why native apps, though the de facto choice for many today, can’t hold that position for too long.
The mobile ecosystem changes faster than we’d like to believe. And it won’t be long before we look back at how primitive our mobile app development was in the era of app stores and their policing of native apps. Hybrid apps are the future of mobile computing.
Facebook Announces “Typing-by-Brain” Project
First it was Elon Musk, now Facebook. Suddenly, all the big Silicon Valley players want to get into brain tech.
Yesterday Facebook announced that it’s working on a “typing by brain” project. At its developer conference, Facebook executive Regina Dugan promised that this brain-computer interface will decode signals from the brain’s speech center at the remarkable rate of 100 words per minute.
Dugan, who runs the Facebook moonshot lab known as Building 8, said the technology for decoding brain signals will be non-invasive. That sets Facebook’s efforts apart from Elon Musk’s mysterious new company Neuralink, which is working on tiny implants called neural dust that would likely be embedded in the blood vessels of the brain. Dugan said that Facebook has no plans for an invasive implant, saying, “Implanted electrodes simply won’t scale.”
The promise of 100 words per minute represents quite a leap from the current speed record. In February, Stanford researchers enabled a paralyzed patient to type 8 words per minute—and that was using a device implanted in his brain. In that experiment the implant was placed in the patient’s motor cortex, and he imagined moving a cursor over a screen to select letters.
Jaimie Henderson, the Stanford neurosurgeon who co-led that research, says his team searched the scientific literature for prior examples of typing-by-brain technology for people with paralysis, looking at both invasive and non-invasive systems.
The highest performing non-invasive system they found was a 2008 study from a research group in Germany that worked with ALS patients. That 2008 study “reported performance of between 1.5 and 4.1 correct characters per minute,” Henderson told IEEE Spectrum. “Assuming an average of 5 letters per word, this is between 0.3 and 0.82 words per minute.” He added that other groups are working on non-invasive systems for able-bodied people, but he hasn’t looked into those speed records.
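Henderson’s conversion is simple arithmetic, and it is easy to check:

```python
# Convert correct characters per minute to words per minute,
# assuming an average of 5 letters per word (Henderson's assumption).
def cpm_to_wpm(cpm, letters_per_word=5):
    return cpm / letters_per_word

print(cpm_to_wpm(1.5))  # 0.3
print(cpm_to_wpm(4.1))  # 0.82
```

So even the best non-invasive system Henderson found runs two orders of magnitude below Facebook’s promised 100 words per minute.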
While Dugan said Facebook’s technology will read from the brain’s speech center, it’s not clear which brain region she’s referring to. A region in the frontal lobe called Broca’s area has been known to be involved in speech production since the 1860s, but today’s neuroscientists are still figuring out the roles that many other brain regions play in speech planning and articulation.
The non-invasive technology capable of pulling off this technical feat is also still a mystery. Most non-invasive brain studies rely on EEG, where scalp electrodes provide a rough general readout of the activity of large groups of neurons.
But Facebook has something else in mind. Dugan said that the gear will use optical imaging, and Facebook’s press release stressed that optical imaging “is the only non-invasive technique capable of providing both the spatial and temporal resolution we need.” A Facebook spokesperson wouldn’t provide any technical details on the approach, saying only that the Building 8 team is developing optical sensors that can be worn on the body. This tech doesn’t exist yet, but they’re working on it.
To try to get some insight, IEEE Spectrum contacted an expert on speech and language processing in the brain: Thomas Naselaris, an assistant professor at the Medical University of South Carolina. Prior non-invasive “brain spellers” have relied on EEG or fMRI, he said, but those systems can’t decode brain signals with high fidelity, so they often rely on the user making binary choices to winnow down a group of letters until they get to the letter they intend to type. It’s a tedious and slow process, he said.
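The binary winnowing Naselaris describes works like a binary search over the alphabet: each yes/no signal halves the candidate set, so selecting one of 26 letters takes at most ceil(log2(26)) = 5 choices. A minimal sketch (my own illustration, with the user’s yes/no answers simulated rather than decoded from brain activity):

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def winnow(target, letters=ALPHABET):
    """Select `target` via binary choices; return (letter, number of choices).

    Each round asks "is your letter in the first half?" -- standing in for
    one decoded binary response in an EEG-based brain speller.
    """
    candidates = list(letters)
    choices = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        choices += 1  # one binary "brain signal" per round
        candidates = half if target in half else candidates[len(half):]
    return candidates[0], choices

letter, n = winnow("z")
print(letter, n)  # z 5
```

Five decoded signals per letter is why these spellers are so slow: at the 2008 system’s rate of a few correct characters per minute, each of those binary choices takes on the order of several seconds to register reliably.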
For Facebook to achieve whole-word or sentence decoding, they’ll have to use a drastically different brain imaging system, he said. “Our understanding of the way that words and their phonological and semantic attributes are encoded in brain activity is actually pretty good currently, but much of this understanding has been enabled by fMRI, which is noninvasive but very slow and not at all portable,” he said. “So I think that the bottleneck will be the imaging technology.”
Mark Zuckerberg added his perspective on the news in a post soon after the announcement, presenting the brain-typing project as a natural evolution of Facebook’s mission to help people share their interior worlds. If they like sharing comments, photos, and videos, why not directly share their thoughts too?
Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no “brain click” would help make things like augmented reality feel much more natural.
Facebook also sought to get ahead of privacy concerns. The underlying message: Don’t worry about the social network introducing a direct thought-to-comment feature that tells your friends what you really think about their posts. Just read the press release and rest assured:
This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech center of your brain. Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them.
We’ll give you updates if any details emerge about which brain region Facebook is targeting and what technology they’ll use to extract the signal.
The Facebook spokesperson did divulge a few names of researchers who have been recruited to this effort: Edward Chang at UC San Francisco, Nathan Crone and Mike Wolmetz at Johns Hopkins University, and Jack Gallant at UC Berkeley. These researchers study the neural circuitry of speech and are investigating where semantic concepts are organized and accessed in the brain. It will be interesting to see what they do for Facebook.
Whitler: As we move rapidly into the holiday shopping season, what are the biggest changes impacting marketing?
Peluso: This is an exciting period for CMOs and CEOs everywhere. You start working on holiday in June, so the preparations, merchandising, engagement, and relationship-building activity have been in place for some time—it’s fun when it starts to crescendo and you get to see the planning come to fruition. Over the years, we’ve seen a big shift to online shopping, which means there is an increase in the amount of data CMOs can use to understand their business and enhance the consumer’s shopping experience. However, this year, what I think is most exciting for marketers is the opportunity to use AI to improve CX (consumer experience). AI empowers marketers to not just use readily available data, but to put dark data to use for the first time.
Whitler: How can marketers use AI to enhance CX? Any examples?
Peluso: Let me provide you with four different examples.
1. AI powered gift selection: This is a tool that retailers like 1800-Flowers.com are using to help consumers pick out just the right gift. For example, 1800-Flowers.com created “GWYN” (Gifts When You Need), a new AI-powered gift concierge that behaves like your own “personal assistant” and learns your preferences as you interact with the system. Through a series of questions, it can get smarter and predict the type of gift that might be most appropriate for somebody. For example, a customer might type, “I’m looking for a gift for my mother,” and GWYN will be able to interpret their question, and then ask a number of qualifying questions about the occasion, sentiment and who the gift is for to ensure she shares the appropriate, tailored gift suggestion for each customer. Importantly, this is different than conjoint or even Bayesian methodologies, because Watson understands, reasons and learns as it interacts with people in natural language and then applies that insight to the gift recommendation. It pulls data from the interaction but also many other sources such as consumer buying trends and behaviors.
2. AI powered product selector: The North Face, an outdoor apparel, equipment and footwear retailer, launched a new interactive online shopping experience powered by IBM’s Watson. Consistent with The North Face brand’s mission of applying technology to transform the retail experience, customers can now use natural conversation as they shop online via an intuitive, dialog-based recommendation engine powered by Fluid XPS and receive outerwear recommendations that are tailored to their needs. Utilizing Watson’s natural language processing ability, XPS helps consumers discover and refine product selections based on their responses to a series of questions. For example, after a shopper enters details on a desired jacket or outdoor activity, XPS will ask questions about factors like location, temperature or gender to provide a recommendation that meets the shopper’s specific usage and climate needs.
3. AI powered Out of Stock Management: A key challenge for retailers is managing their inventory levels. Ideally, you have just the right amount of stock on hand to meet consumer needs. If you are out of stock, you risk upsetting consumers and having them go to another store. If you have too much stock, you have wasted money that you could have used elsewhere. So how can AI combat out-of-stocks? Watson is working with retailers to monitor weather, purchase rates and consumer behavior to do a better job of managing and monitoring supply chains, right-sizing inventory levels and avoiding out-of-stocks. The tools we use are called “IBM Commerce Insights” and “Watson Order Optimizer”.
4. AI powered Consumer Insight: AI is changing how marketers generate insight about consumers to provide more contextual relevance. Understanding things like social profiles, movement, weather, and behavior, AI can help marketers understand at a more granular level what consumers want and need. Consumer needs are dynamic—not static—and require an insight machine that can take this dynamism into account and feed it into your marketing plans. AI goes through a progression of understanding, reasoning, learning, and then adapting insight. Further, AI can include a lot more information in its learning process so that the marketing is more customized at the individual level. For example, Watson AI includes a tone analyzer. The system understands (through augmented intelligence) natural language and it learns over time so that you can reason and adjust offerings. Consider cancer patients. By using the tone analyzer, Watson’s AI can better assess consumer reactions to different treatment protocols and tailor the plan to the individual patient to increase compliance. The potential here is unlimited.
For more insight from IBM, check out these articles: 1) Why marketers should care about weather as much as merchants, 2) IBM study finds CMOs face a challenging role, 3) Cognitive technology and why marketers should care, 4) Highlights from IBM’s CMO Huddle, 5) Insight from IBM’s Advani on turning data into insight, and 6) 2017 Marketing predictions from CEOs, CMOs, and Authors.
The Matrix Voice turns the Raspberry Pi into an Amazon Echo
This open-source device lets programmers build their own version of Alexa.
Recently, Matrix Labs launched an accessory for the Raspberry Pi called the Matrix Creator, which adds a range of capabilities and sensors to the small computer, such as temperature, ultraviolet and pressure sensors. Following that, the company created a kind of successor to the Creator called the Matrix Voice, which focuses solely on voice recognition and is cheaper. It had a successful Indiegogo campaign, and the company plans to ship it in May of this year (via TechCrunch).
As mentioned in the video, the company is looking to bring together the Internet of Things and artificial intelligence by creating this open-source voice recognition device. It includes seven microphones and a ring of LED lights, and it supports custom software as well as established platforms such as Amazon Echo’s Alexa and Google Voice Service.
What will it take for Chile to adopt precision agriculture on a massive scale? Stanley Best, INIA’s National Director of Precision Agriculture, points the way.
Chile presents a strange contradiction when it comes to precision agriculture. On one hand, its characteristics and the advantages it promises seem a perfect fit for the country’s heterogeneous environments and for management practices shaped by the need for precise applications and better product quality. On the other hand, although there is a certain consensus about its advantages, there is still reluctance to adopt this technology on a massive scale, with myths and prejudices deeply rooted in the collective mindset.
As technology advances, whether through small steps that improve what already exists or through big announcements that promise continual revolutions, precision agriculture is becoming not just relevant but practically indispensable in the race for business efficiency and sustainability.
Within the framework of CORFO’s “Programa Estratégico Industrias Inteligentes” (Strategic Program for Smart Industries), Stanley Best, INIA’s National Director of Precision Agriculture, clarified the concepts surrounding the topic and also identified essential pillars such as enabling technology, connectivity and human capital.