mobrec

My Personal Infocloud

BMW recently made the misguided decision to charge a subscription for heated seats in some markets. Aside from the fact that the average BMW is a maintenance disaster and is nearly undriveable by its 3rd year, what is the point of charging a subscription for a fairly standard car feature? Simple greed.

Consider also that every one of those vehicles has the added expense of being fitted with the heating elements and controls that may not be enabled. That seems like wasted money unless the consumer is actually paying for the feature twice: once when it is priced into the cost of manufacturing the car and *again* when the consumer actually tries to use it.

The other problem with these feature flags hidden in your car is that BMW isn't the only one who can access them. It is not a great leap to imagine a new form of ransomware where you have to pay BMW and then some hacker to re-re-re-enable a feature on your vehicle.

> “I use multi-factor authentication on every web site that I can – that way no one can track me.”

Yeah, I am pretty sure that isn't how that works, self-proclaimed cyber security 'expert' on a podcast.

This is a bit of a departure from the tech-oriented articles that I usually post, but I thought I would just “put this out there” because I couldn't find any relevant info when I had this issue with this product.

Basically, the Jotul GV 370 DV is a gas stove (the kind you use to heat a room, not to cook on). It was installed and minimally tested for gas flow. The problem was, it wouldn't stay lit when we tried to use it for the first time. The pattern was: the pilot would light, 30-90 seconds would go by, the gas in the chamber would ignite rather forcefully (a flash combustion) but not stay on. This cycle would continue until a red LED on the front of the control unit started flashing. At that point, I turned off the stove, shut off the gas, and started looking for answers.

I read and re-read the installation docs that came with the Jotul GV 370 DV. One thing I noticed is that the damper setting was not correct for the amount of vent pipe that was installed. That was an easy fix. Unfortunately, it didn't solve the problem (or change it at all). Internet searches didn't reveal any additional useful information, just a couple of edge cases and people arguing philosophy rather than practical solutions.

I even tried the justAnswers web site. I paid $5 for a 'trial membership' and was connected to an absolutely useless 'expert' who just tried to read me the online posts I had already found via Google. His final bit of 'expert advice' was to get a voltage meter, disassemble the stove, and tell him what the voltage readings were on all the stove components. An absolutely pointless exercise. I thanked him for wasting my time and requested a refund from justAnswers.

At this point, I elected to take the glass off the front of the unit and inspect the burning media bed to make sure the gas jets weren't blocked or obstructed. This is when I noticed that the gas bed was out of alignment with the pilot starter. I removed the gas bed tray and re-seated it so that the notch for the pilot was centered on the pilot (instead of all the way to the right, like it was when I originally opened it up). After this adjustment, I carefully placed the gas bed media back on the pan, reassembled the glass, and turned the gas back on.

When I turned on the stove, the pilot came on; 30 seconds later the perimeter of the bed lit, went out, re-lit, then stayed on. Success! All that because of a one-centimeter misalignment of the gas bed with the pilot.

So there you have it. Hopefully this will help someone else who has this issue solve the problem quickly, without paid 'expert' help.

This article reinforces what I have been saying for years: microservices are a big mistake, especially for developers who don't understand distributed systems, high availability, and observability. To be successful, they must be properly designed and implemented, unlike most of the copy-and-paste, we-don't-need-no-stinkin'-design development seen today.

From the article:

> We engineers have an affliction. It’s called “wanting to use the latest tech because it sounds cool, even though it’s technically more difficult.” Got that from the doctor’s office, it’s 100% legit. The diagnosis was written on my prescription for an over-the-counter monolith handbook. From 2004. Seriously though, we do this all the time. Every time something cool happens, we flock to it like moths to a campfire. And more often than not, we get burned.

This feels like an extension of ethics, in general, not being part of the curriculum in education.

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.

The study authors warned that this could have far-reaching consequences:

> Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around the security of open-source tools and business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

> Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.


I think these lessons have application beyond the cloud and AWS.

It feels a bit overstated, but this is an interesting read on AutoML and its potential impacts on data science (and data scientists).

There’s a good reason for all the AutoML hype: AutoML is a must-have for many organizations.

Let’s take the example of Salesforce. They explain that their “customers are looking to predict a host of outcomes — from customer churn, sales forecasts and lead conversions to email marketing click throughs, website purchases, offer acceptances, equipment failures, late payments, and much more.”

In short, ML is ubiquitous. However, for ML to be effective for each unique customer, they would “have to build and deploy thousands of personalized machine learning models trained on each individual customer’s data for every single use case” and “the only way to achieve this without hiring an army of data scientists is through automation.”

While many people see AutoML as a way to bring ease-of-use and efficiency to ML, the reality is that for many enterprise applications, there’s just no other way to do it. A company like Facebook or Salesforce or Google can’t hire data scientists to build custom models for each of their billions of users, so they automate ML instead, enabling unique models at scale.

The number of ML components that are automated depends on the platform, but with Salesforce, it includes feature inference, automated feature engineering, automated feature validation, automated model selection, and hyperparameter optimization.

That’s a mouthful.

What this means is that data scientists can deploy thousands of models in production, with far less grunt work and hand-tuning, reducing turnaround time drastically.

By shifting the work from data crunching towards more meaningful analytics, AutoML enables more creative, business-focused applications of data science.
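The automated pieces named above can be made concrete with a small sketch. Below is a minimal illustration of automated model selection plus hyperparameter optimization using scikit-learn's `GridSearchCV`; the dataset, candidate models, and parameter grids are invented for the example, and this is not Salesforce's actual (proprietary) pipeline:

```python
# Minimal sketch: automated model selection + hyperparameter optimization.
# scikit-learn stands in here; real AutoML platforms automate far more.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models, each with its own grid -- the "model selection" part.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 100]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    # Cross-validated grid search -- the "hyperparameter optimization" part.
    search = GridSearchCV(estimator, grid, cv=3)
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, best_model.score(X_test, y_test))
```

Wrap that loop around thousands of per-customer datasets and you have the basic shape of "unique models at scale" described above.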

> A three-year effort by hundreds of engineers worldwide resulted in the publication in March of 2019 of Ethically Aligned Design (EAD) for Business, a guide for policymakers, engineers, designers, developers and corporations. The effort was headed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS), with John C. Havens as Executive Director, who spoke to AI Trends for an Executive Interview. We recently connected to ask how the effort has been going. Here is an update.

EAD First Edition, a 290-page document which Havens refers to as “applied ethics,” has seen some uptake, for example by IBM, which referred to the IEEE effort within their own resource called Everyday Ethics for AI. The IBM document is 26 pages, easy to digest, and structured into five areas of focus, each with recommended action steps and an example. The example for Accountability involved an AI team developing applications for a hotel. Among the recommendations were: enable guests to turn the AI off; conduct face-to-face interviews to help develop requirements; and institute a feedback learning loop.

The OECD (Organization for Economic Cooperation and Development) issued a paper after the release of an earlier version of EAD attesting to the close affinity between the IEEE’s work and the OECD Principles on AI. The OECD cited as shared values “the need for such systems to primarily serve human well-being through inclusive and sustainable growth; to respect human-centered values and fairness; and to be robust, safe and dependable, including through transparency, explainability and accountability.”

Teaching algorithms to create novel algorithms...

Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.

In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.

So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
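The actual AutoML-Zero system is far more sophisticated, but the core idea — evolve candidate programs built from basic math operations and keep the fittest — can be sketched in a toy form. Everything below (the op set, the hidden target function, the population and mutation scheme) is invented purely for illustration:

```python
import random

random.seed(0)  # deterministic toy run

# Programs are short sequences of (operation, constant) steps -- only the
# kind of basic math a high school student would know.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def run(program, x):
    # Apply each step of the program to x in order.
    for op, c in program:
        x = OPS[op](x, c)
    return x

def fitness(program):
    # Negative total error against a hidden target, f(x) = 2x + 3.
    return -sum(abs(run(program, x) - (2 * x + 3)) for x in range(-5, 6))

def mutate(program):
    # Randomly rewrite one step of the program.
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(list(OPS)), random.choice([-3, -2, -1, 1, 2, 3]))
    return child

# Evolutionary loop: tournament-select a parent, mutate it, and replace
# the weakest member of the population ("survival of the fittest").
population = [[("add", 1), ("mul", 1)] for _ in range(20)]
for _ in range(2000):
    parent = max(random.sample(population, 5), key=fitness)
    child = mutate(parent)
    population.remove(min(population, key=fitness))
    population.append(child)

best = max(population, key=fitness)
```

With enough iterations, the search typically rediscovers the exact program (multiply by 2, then add 3) with no human telling it which steps to use. The same dynamic, scaled enormously and applied to learning-algorithm building blocks rather than arithmetic, is what the researchers describe.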

A post on efforts to further bolster AI transparency and fairness by the AI World Society.

Learning algorithms find patterns in the data they are given. However, the processes by which that data is collected, relevant variables are defined, and hypotheses are formulated may themselves depend on structural unfairness found in society, the paper suggests.

“Algorithms based on such data could introduce or perpetuate a variety of discriminatory biases, thereby maintaining a cycle of injustice,” the authors state. “The community within statistics and machine learning that works on issues of fairness in data analysis have taken a variety of approaches to defining fairness formally, with the aim of ultimately ensuring that learning algorithms are fair.”

The paper poses some tough questions. For instance, “Since, unsurprisingly, learning algorithms that use unfair data can lead to biased or unfair conclusions, two questions immediately suggest themselves. First, what does it mean for a world and data that comes from this world to be fair? And second, if data is indeed unfair, what adjustments must be made to learning algorithms that use this data as input to produce fairer outputs?”
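To make "fairer outputs" concrete: one standard formalization from this literature is demographic parity, which asks whether a model's favorable-outcome rate is similar across groups. A minimal sketch of the metric follows; the metric itself is standard, but the predictions and group labels below are made up for illustration:

```python
# Demographic parity: compare the rate of favorable predictions per group.
# A large gap suggests the model treats the groups differently.
def selection_rates(predictions, groups):
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

# 1 = favorable outcome (e.g. loan approved), one entry per applicant.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)
gap = abs(rates["a"] - rates["b"])  # demographic parity difference
```

Here group "a" is favored 60% of the time and group "b" 40%, a gap of 0.2; the "adjustments to learning algorithms" the paper asks about are techniques for shrinking such gaps without destroying predictive value.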

Cause and effect is a challenging area of statistics; correlation does not imply causation, the experts say. Teasing out causality often involves obtaining data in a carefully controlled way. An early example is the work done by James Lind for the Royal Navy, when scurvy among sailors was a health crisis. Lind organized what later came to be viewed as one of the first instances of a clinical trial. He arranged 12 sailors into six pairs and gave each pair one of six scurvy treatments thought at the time to be effective. Of the treatments, only citrus was effective. That led to citrus products being issued on all Royal Navy ships.

Whether fairness can be defined by computer scientists and engineers is an open question. “Issues of fairness and justice have occupied the ethical, legal, and political literature for centuries. While many general principles are known, such as fairness-as-proportionality, just compensation, and social equality, general definitions have proven elusive,” the paper states.

Moreover, “Indeed, a general definition may not be possible since notions of fairness are ultimately rooted in either ethical principle or ethical intuition, and both principles and intuitions may conflict.”

Mediation analysis is one approach to making algorithms more fair. Needless to say, the work is continuing.