New AI-oriented Features in Android OS

Some interesting creative and AI-ish features are starting to surface in the newer versions of Android and some of its featured apps:

Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices, each bringing new features to the associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations: you can remix emojis and share them with friends as stickers via Gboard.

Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the feature Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly positioned emoji. There are also new reactions for messages that go far beyond a simple thumbs-up, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.

The company’s also adding an interesting tool that provides AI-generated image descriptions for those with low vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo you took. Google’s even adding new languages to its Live Caption feature, enhancing the existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.
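For app developers, the hook TalkBack relies on is an old one: a view’s content description, which TalkBack speaks when the view gains accessibility focus. The new AI-generated descriptions cover images that ship without one. Here’s a minimal Kotlin sketch of the app-side mechanism (the function name and the source of the description string are my own illustrative stand-ins, not part of Google’s announcement):

```kotlin
// Minimal sketch: the accessibility hook TalkBack reads from. When an app
// sets a contentDescription on an image, TalkBack speaks it aloud; the new
// AI-generated descriptions step in for images that don't provide one.
import android.widget.ImageView

fun labelImageForTalkBack(imageView: ImageView, description: String) {
    // TalkBack announces this text when the view gains accessibility focus.
    imageView.contentDescription = description
}
```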

To Microservice or Not

This article reinforces what I have been saying for years: microservices are a big mistake, especially for developers who don’t understand distributed systems, high availability and observability. To be successful, they must be properly designed and implemented, unlike most of the copy-and-paste, we-don’t-need-no-stinkin’-design development seen today.

From the article:

We engineers have an affliction. It’s called “wanting to use the latest tech because it sounds cool, even though it’s technically more difficult.” Got that from the doctor’s office, it’s 100% legit. The diagnosis was written on my prescription for an over-the-counter monolith handbook. From 2004. Seriously though, we do this all the time. Every time something cool happens, we flock to it like moths to a campfire. And more often than not, we get burned.

TinyML and the Future of Design

Interesting post on how ‘magical experiences’ fueled by AI and machine learning will change how products are designed and used.

There is growing momentum, demonstrated by technical progress and ecosystem development. One of the leading startups helping engineers take advantage of TinyML by automating data collection, training, testing, and deployment is Edge Impulse. Starting with embedded and IoT devices, Edge Impulse offers developers the tools and guidance to collect data straight from edge devices and build a model that can detect “behavior”: discerning right from wrong and signal from noise, so devices can actually make sense of what happens in the real world, across billions of devices, in every place and everything. By deploying the Edge Impulse model as part of everyone’s firmware, you create the biggest neural network on earth. Effectively, Edge Impulse gives brains to your previously passive devices so you can build a better product with neural personality.
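To make the “deploy the model in firmware” step concrete, here’s a minimal Kotlin sketch of what on-device inference looks like once a trained model has been exported in TensorFlow Lite format (an export path Edge Impulse supports). The model file path, the input window, and the three output classes are illustrative assumptions on my part, not Edge Impulse’s API:

```kotlin
// Sketch of on-device inference with a TinyML classifier exported as a
// TensorFlow Lite model. File path and tensor shapes are hypothetical.
import org.tensorflow.lite.Interpreter
import java.io.File

fun classifyWindow(samples: FloatArray): FloatArray {
    // Load the compiled model from device storage (path is hypothetical).
    val interpreter = Interpreter(File("/data/local/tmp/motion_classifier.tflite"))
    // Assume the model maps one window of sensor readings to three class scores.
    val output = arrayOf(FloatArray(3))
    interpreter.run(arrayOf(samples), output)
    interpreter.close()
    return output[0] // e.g. scores for "idle", "wave", "shake"
}
```

The point of the sketch: once the heavy lifting (data collection, training, quantization) happens in the tooling, the firmware side reduces to feeding sensor windows through a small interpreter loop.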

Another interesting company is Syntiant, which is building a new processor for deep learning that departs dramatically from traditional computing architectures. By focusing on memory access and parallel processing, its Neural Decision Processors operate at efficiency levels that are orders of magnitude higher than other technologies. The company claims its processors can make devices approximately 200x more efficient, providing 20x the throughput of current low-power MCU solutions and thereby enabling larger networks at significantly lower power. The result? Voice interfaces that allow a far richer and more reliable user experience, otherwise known as “Wow” and “How did it do that?”