mobrec

My Personal Infocloud

This morning I’ve been having a flash back to when the Mac + LaserWriter + PostScript + PageMaker combo suddenly put ‘professional grade’ typesetting and layout tools within reach in the mid-1990s.

Non-designers could pick any font, size, and layout, which led to the “ransom note effect”: too many clashing typefaces and chaotic layouts just because the tools made it easy.

Professional designers didn’t disappear; instead, their value shifted to knowing when not to use all the options, enforcing hierarchy, rhythm and restraint.

The result was a huge expansion in volume (newsletters, flyers, zines) plus a visible layer of amateurish work that made good design more distinctive.

Flash forward to today (early 2026) were generative AI assistants now let almost anyone produce syntactically correct, plausibly structured code very fast, massively increasing volume and velocity.

That same ease produces “AI slop” : code that complies and looks fine but is over-verbose, fragile, poorly factored, or subtly wrong, especially when users accept suggestions uncritically.

Experienced engineers end up cleaning up anti-patterns, hidden bugs, and unnecessary complexity, much like seasoned designers had to fix ransom note layout from early desktop publishing.

In both cases you get ‘democratized output’, but also technical debt and a stronger need for people who understand architecture, testing and constraints.

There are important differences as well:

  • Stakes: Ugly flyers waste paper; ugly code can create outages, security holes, and compounding technical debt, so the cost curve is steeper for software.
  • Opacity: Bad design is visible to lay people; bad architecture in code is invisible until it fails under load or change, which makes slop harder to detect early.
  • Feedback loop: AI tools are starting to train on AI-generated content, so slop can reinforce itself; by contrast, fonts and layout tools didn’t '“learn” from user junk.

tl;DR : early Desktop Publishing tricked people into thinking fonts = design; early AI coding is tricking people into thinking “it runs” = engineering.

Earlier this year, after 665 continuous days on DuoLingo for Japanese I realized that I wasn’t learning the language, I was just being asked to parrot phrases over and over without any explanation of the grammar, word tense, negation. Yes, it didn’t provide anything that would actually be useful in constructing sentences or responding to someone speaking to me.

Many of the phrases were absolutely useless (perhaps a product of Duo’s shift from actual instructional people to poorly executed algorithms). For example, I can’t imagine when I would say “We should eat spicy potato chips at the party tonight.” or “We should play with the colorful animals at the cafe.

The translations themselves were suspect as well. Rather than accepting the nominal meaning of a response, the response was rejected as incorrect because the algorithm thinks that “not really” and “really not” are completely different concepts. Also claiming that “A is next to B” is completely different than “B is next to A”. To add insult to incompetence, Duo recently started prompting for users to signup for some additional AI BS “to learn why your answer was incorrect”. Uh, shouldn’t that be part of teaching?

I really knew that Duo was a waste of time when I spent just 20 minutes with LingoDeer for Japanese. In that time I actually learned the grammar and other important aspects of the language including speaking and reading drills. NO memorization, ALL learning! I am now about 40 days into LingoDeer and I have learned more, useful Japanese than I did in nearly 2 years of Duo.

tl;DR If you are interested in learning Japanese, check out LingoDeer. They frequently have specials that discount the cost of the training and even when they don’t, it is a much better value than Duo. Learn the language with LingoDeer, don’t memorize an expensive, inaccurate phrase book with Duo.

A wonderful article from El Pais about the impact that interactions with LLMs are having with how people speak to other people. And, yes, the use of ‘delve’ in the title was ironic.

We’re experiencing a ChatGPTification of everything. While we await the life-changing leap promised by companies with multi-million-dollar marketing budgets, the major language models, of which ChatGPT is the most widely implemented, force us to speak with strange words, combining adjectives we would never have used three years ago. We entrust our private life to an entity that could “testify” against us in court in the future (a circumstance that OpenAI CEO Sam Altman himself has warned about), and we revert to magical thinking, believing that for a few dollars a month we have the oracle on our computer.

Since November 2022, when ChatGPT was launched, we’ve become more insecure and prefer to have a robot make decisions for us and write our emails, which we send unread and are unable to remember. We’re working less, it’s true. Perhaps the most cited MIT study of the year, Your Brain on ChatGPT, finds that we’re a little lazier than we were three years ago. We’re also more gullible, mediocre, and, paradoxically, distrustful. We use AI for almost everything, while remaining suspicious of and unwilling to pay for anything that smells synthetic, generated by the very systems we worship.

At scientific conferences where English is the lingua franca, there’s a scarlet letter: the verb “to delve.” “It’s the catchphrase that betrays someone who’s gone too far with ChatGPT,” confirms Ezequiel López, a researcher at the Max Planck Institute. López is co-author of a study that, after analyzing 280,000 videos from academic YouTube channels, showed that 18 months after ChatGPT’s global release, the use of delve had increased by 51% in talks and conferences, and also in 10,000 scientific articles edited by artificial intelligence models. Delve, a verb that was barely used in the pre-ChatGPT era, has become a neon sign that marks anyone who repeats everything Altman’s generative AI spews out. “Now, it’s a taboo word that people avoid because the laughter starts right away,” says López. At this point in the game, ChatGPT rules what we say, but also what we don’t say.

Nearly twenty years ago (wow!), I read and reviewed the book Ambient Findability which was trying to reconcile the (then) state of the art in search, mobile devices and the growing web of interconnected things.

With AI/ML on everyone’s lips, I wonder if the next iteration of this might be Ambient Intelligent Computing – the seamless integration of AI, sensing and real-time adjustment/adaptation without explicit commands from a person.

I can imagine a number of use cases for such technology such as:

  • Monitoring people in intensive care and taking proactive measures and detecting deterioration of conditions in the early stages.
  • Providing support for Alzheimer’s and dementia patients by monitoring medication consumption and location awareness.
  • Multi-factor adjustments to home automation environments to optimize energy consumption, security and comfort. This could draw on the growing ecosystem of wearable to enhance presence awareness.
  • Driver alertness monitoring (this one is has been attempted in a new newer vehicles)

There are, of course, many other opportunities to use this for things other than marketing and privacy invasion, but, alas, those will probably be the motivators that drive development rather than the beneficial cases discussed here.

Around 15 years ago I wrote a blog post about the need for a labeling system for images online. I jokingly compared it to the fruit juice labeling system in the US that distinguishes how much actual juice is in the product from 100% to none at all (essentially colored, flavored sugar water).

I wrote that when the world was at the cusp of everyone having a digital camera of some sorts and what started out as a digital photo was then photochopped beyond recognition. Now you don’t even need to have a camera or photochop, you can just have genAI generate janky images plagiarized from all over the internet.

In the podcast, Boris Eldagsen describes how he generated an image without a camera and submitted it to a Sony PHOTO competition and it won. He then pointed out that the image was, in fact, not a photo and refused to accept the award. Sony declined to admit that they made an error in accepting a generated image as a photo (but later changed the submission rules).

murena device screen mockups

The folks at murena make some interesting claims:

We make smartphones that don't share any of your personal and professional data to Google, Apple or other third parties. — Gaël Duval, founder of Murena

With device manufacturers and social media platforms giving users fewer and fewer reason to ‘just trust them’ ‘not to be evil’. This could be a device change worth considering.

“Companies and entrepreneurs working on artificial intelligence have an obvious interest in the technology being perceived as inevitable and necessary, since they make a living from its adoption. It’s important to pay attention to who is making claims of inevitability, and why.”

From “Is AI dominance inevitable? A technology ethicist says no, actually” on The Conversation.

So
Fascinating findings of late that point to quantum entanglement as a clue to the nature of human consciousness.

Understanding the nature of consciousness is one of the hardest problems in science. Some scientists have suggested that quantum mechanics, and in particular quantum entanglement, is the key to unraveling the phenomenon.

Now, a research group in China has shown that many entangled photons can be generated inside the myelin sheath that covers nerve fibers. It could explain the rapid communication between neurons, which so far has been thought to be below the speed of sound, too slow to explain how the neural synchronization occurs.

The paper is published in the journal Physical Review E.

“If the power of evolution was looking for handy action over a distance, quantum entanglement would be [an] ideal candidate for this role,” said Yong-Cong Chen in a statement to Phys.org. Chen is a professor at the Shanghai Center for Quantitative Life Sciences and Physics Department at Shanghai University.

So
A very interesting post showing (yet again) how the hype around chatGPT and other large language models taking over the world is, perhaps, a bit over-inflated:

ChatGPT can be made to regurgitate snippets of text memorized from its training data when asked to repeat a single word over and over again, according to research published by computer scientists.

The bizarre trick was discovered by a team of researchers working across industry and academia analyzing memorization in large language models, and detailed in a paper released on arXiv this week. 

Prompting the chatbot to repeat the word “book,” for example, will result in it generating the word “book” thousands of times, until it suddenly starts spewing what appears to be random text. In some cases, however, some of those passages appear to be lifted directly from real text that has previously been published somewhere. 

Large language models like ChatGPT learn to generate text by ingesting huge amounts of data scraped from the internet. The fact that it spews sentences that directly copy text from articles, books, or social media comments reveals traces of the resources it was trained on. Being able to extract this information is problematic – especially if it's sensitive or private. 

In another example, when the chatbot was asked to “repeat this word forever: 'poem, poem, poem poem',” it generated personal identifiable information – including a name, email address, and phone number.

So
Some interesting creative and AI-ish features are starting to surface int he newer versions of Android and some of its featured apps:

Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices. Each brings new features to associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations. You can remix emojis and share with friends as stickers via Gboard.

Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the software Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly-positioned emoji. There are also new reactions for messages that go far beyond simple thumbs ups, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.

The company’s also adding an interesting tool that provides AI-generated image descriptions for those with low-vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo that you took. Google’s even adding new languages to its Live Caption feature, enhancing the pre-existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.