“Companies and entrepreneurs working on artificial intelligence have an obvious interest in the technology being perceived as inevitable and necessary, since they make a living from its adoption. It’s important to pay attention to who is making claims of inevitability, and why.”
From “Is AI dominance inevitable? A technology ethicist says no, actually” on The Conversation.
‘AI’ Algorithms Aren’t People – Stop Testing Them as if They Are
There is so much unnecessary anthropomorphizing happening in the machine learning (aka artificial intelligence) space. From calling outright fabrications of data “hallucinations” to simulating human emotion (“I’m sorry I couldn’t help with that…”) and giving interfaces human names, the discussion in this area continues to be muddied more than clarified.
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”
Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.
Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3’s ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.”
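To make the test format concrete, here is a minimal sketch of a letter-string analogy item, the style of problem used in this line of research; the prompt wording and the toy successor baseline below are illustrative assumptions, not taken from the paper.

```python
# Illustrative letter-string analogy item of the kind used in
# analogical-reasoning studies; wording and baseline are illustrative only.

def make_prompt(source_a: str, source_b: str, target_a: str) -> str:
    """Build a zero-shot analogy prompt a model could be asked to complete."""
    return (f"If {source_a} changes to {source_b}, "
            f"what does {target_a} change to?")

def successor_baseline(source_a: str, source_b: str, target_a: str):
    """Trivial rule-based baseline: if the source pair advances its last
    letter by one, apply the same rule to the target string."""
    if source_b[:-1] == source_a[:-1] and ord(source_b[-1]) == ord(source_a[-1]) + 1:
        return target_a[:-1] + chr(ord(target_a[-1]) + 1)
    return None  # rule not recognized

print(make_prompt("abcd", "abce", "ijkl"))          # item posed to the model
print(successor_baseline("abcd", "abce", "ijkl"))   # expected answer: ijkm
```

The point of such items is that the mapping (here, “advance the last letter”) is never stated; the solver has to induce it from a single example and carry it over by analogy.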
What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination.
And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).
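As a rough sketch of how such claims are typically probed, the same question can be posed directly and with a chain-of-thought instruction, then scored only on the final answer across a benchmark; the ask_model placeholder below stands in for whichever model API is being evaluated and is an assumption, not a real client.

```python
# Sketch of a chain-of-thought evaluation: pose the same question directly
# and with a "think step by step" instruction, then compare accuracy on the
# final answers. ask_model is a placeholder, not a real API client.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in the model under evaluation")

QUESTION = ("A cafeteria had 23 apples. It used 20 to make lunch and "
            "bought 6 more. How many apples does it have?")

direct_prompt = QUESTION + "\nAnswer:"
cot_prompt = QUESTION + "\nLet's think step by step."

# Typical protocol: run both prompt styles over a set of such questions and
# score only the final numeric answer (here, 9) for each.
```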
Such results are feeding a hype machine that predicts computers will soon come for white-collar jobs, replacing teachers, journalists, lawyers and more. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.
But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit.
“There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.”
https://www.technologyreview.com/2023/08/30/1078670/large-language-models-arent-people-lets-stop-testing-them-like-they-were
Worthy of Recognition and Praise – Jose Andres
Why is the media so focused on the most despicable, vile, self-serving garbage in society (rhymes with Melon Husk) when humble, dedicated people like Jose Andres actually work to help people in need?
Before chef José Andrés became famous for World Central Kitchen, he had already scaled the heights of his profession. His new cookbook celebrates the group’s humanitarian impact.
“I remember this Spanish guy screaming,” said chef-volunteer Karla Hoyos, describing the first time she met chef José Andrés. “He had just come from a meeting with FEMA [the US emergency management agency], and he was furious. And I thought, ‘Oh, no, no, nooo…’.” She shakes her head emphatically. “I am not going to deal with this person. I don’t care who he is.”
It was September 2017, shortly after Hoyos had arrived in Puerto Rico following Hurricane Maria, the deadly storm that devastated the island, killing nearly 3,000 people, making most roads impassable and knocking out 80% of the power grid. Several days earlier, Andrés had touched down with a team from his non-profit, World Central Kitchen (WCK), which he founded in 2010 after returning from Haiti where he fed survivors of a catastrophic earthquake. The organisation originally emphasised longer-term programmes – such as supporting nutritional training for young mothers – but since Maria, its efforts have focused on deploying an army of culinary first responders to feed people during and after the world’s worst disasters, natural or otherwise.
https://www.bbc.com/travel/article/20230911-jos-andrs-the-man-who-created-an-army-of-culinary-first-responders
AI Ethics Principles
Lots to chew on in this article.
I think the real challenge is that in letting corporations set their own ‘standards’, you end up with a bunch of standards that only suit each particular corporation.
AI Ethics Not Being Taught to Data Scientists
This feels like an extension of ethics, in general, not being part of the curriculum in education.
Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.
Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
The study authors warned that this could have far-reaching consequences:
Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.
The survey also revealed concerns around the security of open-source tools, business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:
Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.
While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.
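For concreteness, a “fairness system” often amounts to automated checks like the sketch below, which measures the gap in positive-prediction rates across groups (demographic parity); the data and the review threshold here are made up for illustration.

```python
# Minimal sketch of the kind of check a fairness tool might run:
# the demographic parity gap, i.e. the spread in positive-prediction
# rates across groups. Data and threshold are illustrative only.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-prediction rate across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]                # model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")      # 0.75 - 0.25 = 0.50
if gap > 0.2:                                    # arbitrary illustrative threshold
    print("flag model for review")
```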
IEEE Ethical Design Initiative
A three-year effort by hundreds of engineers worldwide resulted in the publication, in March 2019, of Ethically Aligned Design (EAD) for Business, a guide for policymakers, engineers, designers, developers and corporations. The effort was headed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS), with John C. Havens as Executive Director, who spoke to AI Trends for an Executive Interview. We recently connected to ask how the effort has been going. Here is an update.
EAD First Edition, a 290-page document which Havens refers to as “applied ethics,” has seen some uptake, for example by IBM, which referred to the IEEE effort within its own resource called Everyday Ethics for AI. The IBM document is 26 pages, easy to digest, and structured into five areas of focus, each with recommended action steps and an example. The example for Accountability involved an AI team developing applications for a hotel. Among the recommendations were: enable guests to turn the AI off; conduct face-to-face interviews to help develop requirements; and institute a feedback learning loop.
The OECD (Organization for Economic Cooperation and Development) issued a paper after the release of an earlier version of EAD attesting to the close affinity between the IEEE’s work and the OECD Principles on AI. The OECD cited as shared values “the need for such systems to primarily serve human well-being through inclusive and sustainable growth; to respect human-centered values and fairness; and to be robust, safe and dependable, including through transparency, explainability and accountability.”
AI Transparency and Fairness
A post on the AI World Society’s efforts to further bolster AI transparency and fairness.
Learning algorithms find patterns in the data they are given. However, the processes by which that data is collected, relevant variables are defined, and hypotheses are formulated may themselves depend on structural unfairness found in society, the paper suggests.
“Algorithms based on such data could introduce or perpetuate a variety of discriminatory biases, thereby maintaining a cycle of injustice,” the authors state. “The community within statistics and machine learning that works on issues of fairness in data analysis have taken a variety of approaches to defining fairness formally, with the aim of ultimately ensuring that learning algorithms are fair.”
The paper poses some tough questions. For instance, “Since, unsurprisingly, learning algorithms that use unfair data can lead to biased or unfair conclusions, two questions immediately suggest themselves. First, what does it mean for a world and data that comes from this world to be fair? And second, if data is indeed unfair, what adjustments must be made to learning algorithms that use this data as input to produce fairer outputs?”
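One family of adjustments proposed in the fairness literature works on the data side, for example reweighing training examples so that group membership and the outcome label look statistically independent before a model is fit; the sketch below illustrates that general idea and is not the specific method this paper advocates.

```python
# Simplified sketch of a pre-processing fairness adjustment: reweigh each
# training example by P(group) * P(label) / P(group, label) so that group
# and label are independent in the weighted data. Illustrative only.
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [(p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
            for g, y in zip(groups, labels)]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
print(reweighing_weights(groups, labels))
# Under-represented (group, label) pairs get weight > 1, over-represented < 1.
```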
Cause and effect is a challenging area of statistics; correlation does not imply causation, the experts say. Teasing out causality often involves obtaining data in a carefully controlled way. An early example is the work done by James Lind for the Royal Navy, when scurvy among sailors was a health crisis. Lind organized what later came to be viewed as one of the first instances of a clinical trial. He arranged 12 sailors into six pairs and gave each pair one of six scurvy treatments thought at the time to be effective. Of the treatments, only citrus was effective. That led to citrus products being issued on all Royal Navy ships.
Whether fairness can be defined by computer scientists and engineers is an open question. “Issues of fairness and justice have occupied the ethical, legal, and political literature for centuries. While many general principles are known, such as fairness-as-proportionality, just compensation, and social equality, general definitions have proven elusive,” the paper states.
Moreover, “Indeed, a general definition may not be possible since notions of fairness are ultimately rooted in either ethical principle or ethical intuition, and both principles and intuitions may conflict.”
Mediation analysis is one approach to making algorithms more fair. Needless to say, the work is continuing.
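As a rough sketch of what mediation analysis involves, the classic regression-based decomposition splits the total effect of a variable on an outcome into a direct effect and an indirect effect that flows through a mediator; the synthetic data and coefficients below are purely illustrative.

```python
# Rough sketch of regression-based mediation analysis: decompose the total
# effect of X on Y into a direct effect plus an indirect effect via a
# mediator M. Synthetic data, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                    # e.g., a sensitive attribute proxy
m = 0.8 * x + rng.normal(size=n)          # mediator influenced by x
y = 0.5 * x + 0.6 * m + rng.normal(size=n)

def ols(features, target):
    """Least-squares coefficients with an intercept column."""
    X = np.column_stack([np.ones(len(target))] + list(features))
    return np.linalg.lstsq(X, target, rcond=None)[0]

total = ols([x], y)[1]                    # Y ~ X        -> total effect
_, direct, m_coef = ols([x, m], y)        # Y ~ X + M    -> direct effect
a_path = ols([x], m)[1]                   # M ~ X        -> path into mediator
indirect = a_path * m_coef

print(f"total ~ {total:.2f}, direct ~ {direct:.2f}, indirect ~ {indirect:.2f}")
# total is approximately direct + indirect (about 0.98 = 0.50 + 0.48 here)
```

One fairness-oriented use of such a decomposition is to ask how much of an observed disparity flows through pathways considered legitimate versus those considered unfair.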
Trustworthy AI Framework
An interesting article on business challenges with artificial intelligence.
Artificial intelligence (AI) technology continues to advance by leaps and bounds and is quickly becoming a potential disrupter and essential enabler for nearly every company in every industry. At this stage, one of the barriers to widespread AI deployment is no longer the technology itself; rather, it’s a set of challenges that ironically are far more human: ethics, governance, and human values.
As AI expands into almost every aspect of modern life, the risks of misbehaving AI increase exponentially—to a point where those risks can literally become a matter of life and death. Real-world examples of AI gone awry include systems that discriminate against people based on their race, age, or gender, and social media systems that inadvertently spread rumors and disinformation.
Even worse, these examples are just the tip of the iceberg. As AI is deployed on a larger scale, the associated risks will likely only increase—potentially having serious consequences for society at large, and even greater consequences for the companies responsible. From a business perspective, these potential consequences include everything from lawsuits, regulatory fines, and angry customers to embarrassment, reputation damage, and destruction of shareholder value.
Yet with AI now becoming a required business capability rather than just a “nice to have,” companies no longer have the option to avoid AI’s unique risks simply by avoiding AI altogether. Instead, they must learn how to identify and manage AI risks effectively. To achieve the potential of human and machine collaboration, organizations need to communicate a plan for AI that is adopted and understood from the mailroom to the boardroom. By having an ethical framework in place, organizations create a common language with which to articulate trust and help ensure the integrity of data among all of their internal and external stakeholders. A common framework and lens for applying governance and risk management consistently across the enterprise can enable faster and more consistent adoption of AI.