Engineers tend to see things in unambiguous terms, which some may call black-and-white terms, such as a choice between right and wrong or good and bad. The consideration of ethics in AI is highly nuanced, with vast gray areas, making it challenging for AI software engineers to apply it in their work.
That was a takeaway from a session on the Future of Standards and Ethical AI at the AI World Government conference held in-person and virtually in Alexandria, Va. this week.
An overall impression from the conference is that the discussion of AI and ethics is happening in virtually every quarter of the federal government’s vast AI enterprise, and the consistency of the points made across these different and independent efforts stood out.
“We engineers often think of ethics as a fuzzy thing that no one has really explained,” stated Beth-Anne Schuelke-Leech, an associate professor of Engineering Management and Entrepreneurship at the University of Windsor, Ontario, Canada, speaking at the Future of Ethical AI session. “It can be difficult for engineers looking for solid constraints to be told to be ethical. That becomes really complicated because we don’t know what it really means.”
Schuelke-Leech started her career as an engineer, then decided to pursue a PhD in public policy, a background which enables her to see things as an engineer and as a social scientist. “I got a PhD in social science, and have been pulled back into the engineering world where I am involved in AI projects, but based in a mechanical engineering faculty,” she said.
An engineering project has a goal, which describes the purpose, a set of needed features and functions, and a set of constraints, such as budget and timeline. “The standards and regulations become part of the constraints,” she said. “If I know I have to comply with it, I will do that. But if you tell me it’s a good thing to do, I may or may not adopt that.”
Schuelke-Leech also serves as chair of the IEEE Society’s Committee on the Social Implications of Technology Standards. She commented, “Voluntary compliance standards such as those from the IEEE are essential, coming from people in the industry getting together to say this is what we think we should do as an industry.”
Some standards, such as around interoperability, do not have the force of law but engineers comply with them, so their systems will work. Other standards are described as good practices, but are not required to be followed. “Whether it helps me to achieve my goal or hinders me getting to the objective, is how the engineer looks at it,” she said.
The Pursuit of AI Ethics Described as “Messy and Difficult”
Sara Jordan, senior counsel with the Future of Privacy Forum, who joined Schuelke-Leech in the session, works on the ethical challenges of AI and machine learning and is an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethics is messy and difficult, and is context-laden. We have a proliferation of theories, frameworks and constructs,” she said, adding, “The practice of ethical AI will require repeatable, rigorous thinking in context.”
Schuelke-Leech offered, “Ethics is not an end outcome. It is the process being followed. But I’m also looking for someone to tell me what I need to do to do my job, to tell me how to be ethical, what rules I’m supposed to follow, to take away the ambiguity.”
“Engineers shut down when you get into funny words that they don’t understand, like ‘ontological.’ They’ve been taking math and science since they were 13 years old,” she said.
She has found it difficult to get engineers involved in attempts to draft standards for ethical AI. “Engineers are missing from the table,” she said. “The debates about whether we can get to 100% ethical are conversations engineers do not have.”
She concluded, “If their managers tell them to figure it out, they will do so. We need to help the engineers cross the bridge halfway. It is essential that social scientists and engineers don’t give up on this.”
Leader’s Panel Described Integration of Ethics into AI Development Practices
The topic of ethics in AI is coming up more in the curriculum of the US Naval War College in Newport, R.I., which was established to provide advanced study for US Navy officers and now educates leaders from all services. Ross Coffey, a military professor of National Security Affairs at the institution, participated in a Leader’s Panel on AI, Ethics and Smart Policy at AI World Government.
“The ethical literacy of students increases over time as they are working with these ethical issues, which is why it is an urgent matter because it will take a long time,” Coffey said.
Panel member Carole Smith, a senior research scientist with Carnegie Mellon University who studies human-machine interaction, has been involved in integrating ethics into AI systems development since 2015. She cited the importance of “demystifying” AI.
“My interest is in understanding what kind of interactions we can create where the human is appropriately trusting the system they are working with, not over- or under-trusting it,” she said, adding, “In general, people have higher expectations than they should for the systems.”
As an example, she cited the Tesla Autopilot features, which implement self-driving car capability to a degree but not completely. “People assume the system can do a much broader set of activities than it was designed to do. Helping people understand the limitations of a system is important. Everyone needs to understand the expected outcomes of a system and what some of the mitigating circumstances might be,” she said.
Panel member Taka Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO’s Innovation Lab, sees a gap in AI literacy for the young workforce coming into the federal government. “Data scientist training does not always include ethics. Accountable AI is a laudable construct, but I’m not sure everyone buys into it. We need their responsibility to go beyond technical aspects and be accountable to the end user we are trying to serve,” he said.
Panel moderator Alison Brooks, PhD, research VP of Smart Cities and Communities at the IDC market research firm, asked whether principles of ethical AI can be shared across the boundaries of nations.
“We will have a limited ability for every nation to align on the same exact approach, but we will have to align in some ways on what we will not allow AI to do, and what people will also be responsible for,” stated Smith of CMU.
The panelists credited the European Commission for being out front on these issues of ethics, especially in the enforcement realm.
Coffey of the Naval War College acknowledged the importance of finding common ground around AI ethics. “From a military perspective, our interoperability needs to go to a whole new level. We need to find common ground with our partners and our allies on what we will allow AI to do and what we will not allow AI to do.” Unfortunately, “I don’t know if that discussion is happening,” he said.
Discussion on AI ethics could perhaps be pursued as part of certain existing treaties, Smith suggested.
The many AI ethics principles, frameworks, and roadmaps being offered across federal agencies can be challenging to follow and to make consistent. Ariga said, “I am hopeful that over the next year or two, we will see a coalescing.”
For more information and access to recorded sessions, go to AI World Government.
AI is more accessible to young people in the workforce who grew up as ‘digital natives’ with Alexa and self-driving cars as part of the landscape, giving them expectations grounded in their experience of what is possible.
That idea set the foundation for a panel discussion at AI World Government on Mindset Needs and Skill Set Myths for AI engineering teams, held this week virtually and in-person in Alexandria, Va.
“People feel that AI is within their grasp because the technology is available, but the technology is ahead of our cultural maturity,” said panel member Dorothy Aronson, CIO and Chief Data Officer for the National Science Foundation. “It’s like giving a sharp object to a child. We might have access to big data, but it might not be the right thing to do,” to work with it in all cases.
Things are accelerating, which is raising expectations. When panel member Vivek Rao, lecturer and researcher at the University of California at Berkeley, was working on his PhD, a paper on natural language processing might be a master’s thesis. “Now we assign it as a homework assignment with a two-day turnaround. We have an enormous amount of compute power that was not available even two years ago,” he said of his students, whom he described as “digital natives” with high expectations of what AI makes possible.
Panel moderator Rachel Dzombak, digital transformation lead at the Software Engineering Institute of Carnegie Mellon University, asked the panelists what is unique about working on AI in the government.
Aronson said the government cannot get too far ahead with the technology, or the users will not know how to interact with it. “We’re not building iPhones,” she said. “We have experimentation going on, and we are always looking ahead, anticipating the future, so we can make the most cost-effective decisions. In the government right now, we are seeing the convergence of the emerging generation and the close-to-retiring generation, who we also have to serve.”
Early in her career, Aronson did not want to work in the government. “I thought it meant you were either in the armed services or the Peace Corps,” she said. “But what I learned after a while is what motivates federal employees is service to larger, problem-solving institutions. We are trying to solve really big problems of equity and diversity, and getting food to people and keeping people safe. People that work for the government are dedicated to those missions.”
She referred to her two children in their 20s, who like the idea of service, but in “tiny chunks,” meaning, “They don’t look at the government as a place where they have freedom, and they can do whatever they want. They see it as a lockdown situation. But it’s really not.”
Berkeley Students Learn About Role of Government in Disaster Response
Rao of Berkeley said his students are seeing wildfires in California and asking who is working on the challenge of doing something about them. When he tells them it is almost always local, state and federal government entities, “Students are generally surprised to find that out.”
In one example, he developed a course on innovation in disaster response, in collaboration with CMU and the Department of Defense, the Army Futures Lab and Coast Guard search and rescue. “This was eye-opening for students,” he said. At the outset, two of 35 students expressed interest in a federal government career. By the end of the course, 10 of the 35 students were expressing interest. One of them was hired by the Naval Surface Warfare Center outside Corona, Calif. as a software engineer, Rao said.
Aronson described the process of bringing on new federal employees as a “heavy lift,” suggesting, “if we could prepare in advance, it would move a lot faster.”
Asked by Dzombak what skill sets and mindsets are seen as essential to AI engineering teams, panel member Bryan Lane, director of Data & AI at the General Services Administration (who announced during the session that he is taking on a new role at FDIC), said resiliency is a necessary quality.
Lane is a technology executive within the GSA IT Modernization Centers of Excellence (CoE) with over 15 years of experience leading advanced analytics and technology initiatives. He has led the GSA partnership with the DoD Joint Artificial Intelligence Center (JAIC). [Ed. Note: Known as “the Jake.”] Lane is also the founder of DATA XD and has industry experience managing acquisition portfolios.
“The most important thing about resilient teams going on an AI journey is that you need to be ready for the unexpected, and the mission persists,” he said. “If you are all aligned on the importance of the mission, the team can be held together.”
Good Sign that Team Members Acknowledge Having “Never Done This Before”
Regarding mindset, he said more of his team members are coming to him and saying, “I’ve never done this before.” He sees that as a good sign that offers an opportunity to talk about risk and alternative solutions. Lane sees it as positive “when your team has the psychological safety to say that they don’t know something.” “The focus is always on what you have done and what you have delivered. Rarely is the focus on what you have not done before and what you want to grow into,” he said.
Aronson has found it challenging to get AI projects off the ground. “It’s hard to tell management that you have a use case or problem to solve and want to go at it, and there is a 50-50 chance it will get done, and you don’t know how much it’s going to cost,” she said. “It comes down to articulating the rationale and convincing others it’s the right thing to do to move forward.”
Rao said he talks to students about experimentation and having an experimental mindset. “AI tools can be easily accessible, but they can mask the challenges you can encounter. When you apply the vision API, for example in the context of challenges in your business or government agency, things may not be smooth,” he said.
Moderator Dzombak asked the panelists how they build teams. Aronson said, “You need a mix of people.” She has tried “communities of practice” around solving specific problems, where people can come and go. “You bring people together around a problem and not a tool,” she said.
Lane seconded this. “I really have stopped focusing on tools in general,” he said. He ran experiments at JAIC in accounting, finance and other areas. “We found it’s not really about the tools. It’s about getting the right people together to understand the problems, then looking at the tools available,” he said.
Lane said he sets up “cross-functional teams” that are “a little more formal than a community of interest.” He has found them to be effective for working together on a problem for maybe 45 days. He also likes working with customers of the needed services inside the organization, and has seen customers learn about data management and AI as a result. “We will pick up one or two along the way who become advocates for accelerating AI throughout the organization,” Lane said.
Lane sees it taking five years to work out proven methods of thinking, working, and best practices for developing AI systems to serve the government. He mentioned The Opportunity Project (TOP) of the US Census Bureau, begun in 2016 to work on challenges such as ocean plastic pollution, COVID-19 economic recovery and disaster response. TOP has engaged in over 135 public-facing projects in that time, and has over 1,300 alumni including developers, designers, community leaders, data and policy experts, students and government agencies.
“It’s based on a way of thinking and how to organize work,” Lane said. “We have to scale the model of delivery, but five years from now, we will have enough proof of concept to know what works and what does not.”
Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event held virtually and in-person this week in Alexandria, Va.
Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency and plans to make available to others.
And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a unit of the Department of Defense founded to help the US military make faster use of emerging commercial technologies, described work in his unit to translate principles of AI development into terminology that an engineer can apply.
Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO’s Innovation Lab, discussed an AI Accountability Framework he helped to develop by convening a forum of experts from government, industry, and nonprofits, as well as federal inspector general officials and AI experts.
“We are adopting an auditor’s perspective on the AI accountability framework,” Ariga said. “GAO is in the business of verification.”
The effort to produce a formal framework began in September 2020 with a two-day forum whose participants were 60% women, 40% of them from underrepresented minorities. The effort was spurred by a desire to ground the AI accountability framework in the reality of an engineer’s day-to-day work. The resulting framework was first published in June as what Ariga described as “version 1.0.”
Seeking to Bring a “High-Altitude Posture” Down to Earth
“We found the AI accountability framework had a very high-altitude posture,” Ariga said. “These are laudable ideals and aspirations, but what do they mean to the day-to-day AI practitioner? There is a gap, while we see AI proliferating across the government.”
“We landed on a lifecycle approach,” he said, which steps through stages of design, development, deployment and continuous monitoring. The development effort stands on four “pillars” of Governance, Data, Monitoring and Performance.
Governance reviews what the organization has put in place to oversee the AI efforts. “The chief AI officer might be in place, but what does it mean? Can the person make changes? Is it multidisciplinary?” At a system level within this pillar, the team will review individual AI models to see if they were “purposely deliberated.”
For the Data pillar, his team will examine how the training data was evaluated, how representative it is, and whether it is functioning as intended.
For the Performance pillar, the team will consider the “societal impact” the AI system will have in deployment, including whether it risks a violation of the Civil Rights Act. “Auditors have a long-standing track record of evaluating equity. We grounded the evaluation of AI to a proven system,” Ariga said.
Emphasizing the importance of continuous monitoring, he said, “AI is not a technology you deploy and forget. We are preparing to continually monitor for model drift and the fragility of algorithms, and we are scaling the AI appropriately.” The evaluations will determine whether the AI system continues to meet the need “or whether a sunset is more appropriate,” Ariga said.
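To make the lifecycle-and-pillars structure concrete, here is a minimal sketch of how it might be rendered as a reviewer’s checklist. The stage and pillar names come from the description above, but the specific questions are illustrative assumptions, not the GAO’s published framework text.

```python
# Illustrative sketch only: lifecycle stages and pillar names are from the talk
# above; the example questions are hypothetical, not the GAO's actual artifact.

LIFECYCLE_STAGES = ["design", "development", "deployment", "continuous monitoring"]

PILLAR_QUESTIONS = {
    "Governance": [
        "Is there a chief AI officer (or equivalent) with authority to make changes?",
        "Is oversight of the AI effort multidisciplinary?",
    ],
    "Data": [
        "How was the training data evaluated, and how representative is it?",
        "Is the data functioning as intended in the model?",
    ],
    "Performance": [
        "What societal impact could deployment have (e.g., civil rights risks)?",
    ],
    "Monitoring": [
        "Is model drift and algorithm fragility being tracked after deployment?",
        "Should the system be sunset if it no longer meets the need?",
    ],
}


def audit_checklist():
    """Yield (stage, pillar, question) tuples an auditor might walk through."""
    for stage in LIFECYCLE_STAGES:
        for pillar, questions in PILLAR_QUESTIONS.items():
            for question in questions:
                yield stage, pillar, question


if __name__ == "__main__":
    for stage, pillar, question in audit_checklist():
        print(f"[{stage}] {pillar}: {question}")
```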
He is part of the discussion with NIST on an overall government AI accountability framework. “We don’t want an ecosystem of confusion,” Ariga said. “We want a whole-government approach. We feel that this is a useful first step in pushing high-level ideas down to an altitude meaningful to the practitioners of AI.”
DIU Assesses Whether Proposed Projects Meet Ethical AI Guidelines
At the DIU, Goodman is involved in a similar effort to develop guidelines for developers of AI projects within the government.
Projects Goodman has been involved with include implementations of AI for humanitarian assistance and disaster response, predictive maintenance, counter-disinformation, and predictive health. He heads the Responsible AI Working Group. He is a faculty member of Singularity University, has a wide range of consulting clients from inside and outside the government, and holds a PhD in AI and Philosophy from the University of Oxford.
The DOD in February 2020 adopted five areas of Ethical Principles for AI after 15 months of consulting with AI experts in commercial industry, government, academia, and the American public. These areas are: Responsible, Equitable, Traceable, Reliable and Governable.
“Those are well-conceived, but it’s not obvious to an engineer how to translate them into a specific project requirement,” Goodman said in a presentation on Responsible AI Guidelines at the AI World Government event. “That’s the gap we are trying to fill.”
Before the DIU even considers a project, they run through the ethical principles to see if it passes muster. Not all projects do. “There needs to be an option to say the technology is not there or the problem is not compatible with AI,” he said.
All project stakeholders, including from commercial vendors and within the government, need to be able to test and validate and go beyond minimum legal requirements to meet the principles. “The law is not moving as fast as AI, which is why these principles are important,” he said.
Also, collaboration is going on across the government to ensure values are being preserved and maintained. “Our intention with these guidelines is not to try to achieve perfection, but to avoid catastrophic consequences,” Goodman said. “It can be difficult to get a group to agree on what the best outcome is, but it’s easier to get the group to agree on what the worst-case outcome is.”
The DIU guidelines along with case studies and supplemental materials will be published on the DIU website “soon,” Goodman said, to help others leverage the experience.
Here are Questions DIU Asks Before Development Starts
The first step in the guidelines is to define the task. “That’s the single most important question,” he said. “Only if there is an advantage should you use AI.”
Next is a benchmark, which needs to be set up front to know if the project has delivered.
Next, he evaluates ownership of the candidate data. “Data is critical to the AI system and is the place where a lot of problems can exist,” Goodman said. “We need a certain contract on who owns the data. If ambiguous, this can lead to problems.”
Next, Goodman’s team wants a sample of data to evaluate. Then, they need to know how and why the information was collected. “If consent was given for one purpose, we cannot use it for another purpose without re-obtaining consent,” he said.
Next, the team asks if the responsible stakeholders are identified, such as pilots who could be affected if a component fails.
Next, the responsible mission-holders must be identified. “We need a single individual for this,” Goodman said. “Often we have a tradeoff between the performance of an algorithm and its explainability. We might have to decide between the two. Those kinds of decisions have an ethical component and an operational component. So we need to have someone who is accountable for those decisions, which is consistent with the chain of command in the DOD.”
Finally, the DIU team requires a process for rolling back if things go wrong. “We need to be cautious about abandoning the previous system,” he said.
Once all these questions are answered in a satisfactory way, the team moves on to the development phase.
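The sequence of questions above lends itself to a simple gating checklist. The sketch below is a hypothetical rendering of that flow; the field names and pass/fail logic are assumptions made for illustration, not the DIU’s actual guidelines document.

```python
from dataclasses import dataclass


@dataclass
class ProjectIntake:
    """Hypothetical pre-development review, loosely following the questions above."""
    task_definition: str = ""
    ai_offers_advantage: bool = False       # "Only if there is an advantage should you use AI"
    benchmark_defined: bool = False         # success benchmark set up front
    data_ownership_clear: bool = False      # a contract on who owns the data
    data_sample_reviewed: bool = False      # a sample of the data has been evaluated
    consent_matches_use: bool = False       # data collected for this purpose, or re-consented
    stakeholders_identified: bool = False   # e.g., pilots affected if a component fails
    accountable_mission_holder: str = ""    # a single accountable individual
    rollback_plan: bool = False             # process for rolling back if things go wrong

    def unresolved(self) -> list:
        """Return the names of questions still unanswered."""
        return [name for name, value in self.__dict__.items() if value in (False, "")]

    def ready_for_development(self) -> bool:
        return not self.unresolved()


if __name__ == "__main__":
    intake = ProjectIntake(task_definition="predictive maintenance triage",
                           ai_offers_advantage=True)
    print("Proceed to development:", intake.ready_for_development())
    print("Open questions:", intake.unresolved())
```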
In lessons learned, Goodman said, “Metrics are key. And simply measuring accuracy might not be adequate. We need to be able to measure success.”
Also, fit the technology to the task. “High risk applications require low-risk technology. And when potential harm is significant, we need to have high confidence in the technology,” he said.
Another lesson learned is to set expectations with commercial vendors. “We need vendors to be transparent,” he said. “When someone says they have a proprietary algorithm they cannot tell us about, we are very wary. We view the relationship as a collaboration. It’s the only way we can ensure that the AI is developed responsibly.”
Lastly, “AI is not magic. It will not solve everything. It should only be used when necessary and only when we can prove it will provide an advantage.”
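On the metrics point above, here is a small illustration of why accuracy alone can be inadequate: with imbalanced classes, a model that never flags the rare case of interest can still score high accuracy. The numbers below are made up for illustration.

```python
# Toy example: 100 cases, only 5 true positives (the rare event of interest).
# A "model" that always predicts negative looks accurate but is useless.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100  # always predicts "no event"

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)                   # 0.95, looks great
recall = tp / (tp + fn) if (tp + fn) else 0.0        # 0.0, misses every real event
precision = tp / (tp + fp) if (tp + fp) else 0.0     # 0.0, never flags anything

print(f"accuracy={accuracy:.2f} recall={recall:.2f} precision={precision:.2f}")
```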
Advances in the AI behind speech recognition are driving growth in the market, attracting venture capital, funding startups, and posing challenges to established players.
The growing acceptance and use of speech recognition devices is driving the market, which is expected to reach $26.8 billion globally by 2025, according to an estimate by Meticulous Research cited in a recent account in Analytics Insight. Better speed and accuracy are among the benefits of the evolving technology.
One company in the throes of this new growth, AssemblyAI of San Francisco, is offering an API for speech recognition capable of transcribing videos, podcasts, phone calls, and remote meetings. The company was founded by CEO Dylan Fox in 2017 and has received backing from Y Combinator, a startup accelerator, as well as NVIDIA.
Fox has an unusual background for a high-tech entrepreneur. He is a graduate of George Washington University with a degree in business administration, business economics, and public policy. He got a job as a software engineer for machine learning in the emerging product lab of Cisco in San Francisco, working on deep neural networks and machine learning. He got the idea for AssemblyAI and attracted capital from Y Combinator, which enabled him to hire data scientists and data engineers to get the technology off the ground.
Asked in an interview with AI Trends how he made this transition from undergrad in business administration and economics to high-tech entrepreneur, Fox said, “I taught myself how to program, which led me to a path of machine learning. I was looking for a harder software challenge, which led to natural language processing, which took me to Cisco.” They were working on Siri for the Enterprise for Apple at the time.
To speed up the work, Cisco was looking to acquire speech recognition software; Fox was in the catbird’s seat for the search. “We looked at Nuance,” for example, acknowledged as a market leader and owner of more speech recognition software than its competitors. (The acquisition of Nuance by Microsoft for $19.6 billion is expected to be finalized by year-end.) The young, budding entrepreneur was not impressed. “It was crazy how bad all the options were from an accuracy and a developer point of view,” he stated.
He was impressed by Twilio, a San Francisco-based company founded in 2008, which that year released the Twilio Voice API to make and receive phone calls hosted in the cloud. The company has since raised $103 million in venture capital. “They were setting new standards for a good API for developers,” Fox said.
Fox’s idea was to use AI and machine learning to achieve “super accurate results,” and make it easy for developers to incorporate the API into their products. One customer is CallRail, offering call tracking and marketing analytics software, which plans to incorporate AssemblyAI’s API to gain insight into why people are calling. Other customers include NBC and the Wall Street Journal, using the product to transcribe content and interviews, and provide closed captioning.
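As a rough illustration of the developer experience described here, the sketch below shows what submitting an audio file to a cloud transcription API of this kind might look like. The base URL, endpoint paths, field names, and polling pattern are assumptions for illustration, not a copy of AssemblyAI’s documented interface.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; obtained from the provider's dashboard
BASE_URL = "https://api.example-transcription.com/v2"  # hypothetical endpoint


def transcribe(audio_url: str) -> str:
    """Submit an audio URL for transcription and poll until the job completes."""
    headers = {"authorization": API_KEY}

    # Kick off the transcription job.
    job = requests.post(f"{BASE_URL}/transcript",
                        json={"audio_url": audio_url},
                        headers=headers).json()

    # Poll for completion; real services may also offer webhooks to avoid polling.
    while True:
        result = requests.get(f"{BASE_URL}/transcript/{job['id']}",
                              headers=headers).json()
        if result["status"] in ("completed", "error"):
            break
        time.sleep(3)

    if result["status"] == "error":
        raise RuntimeError(result.get("error", "transcription failed"))
    return result["text"]


if __name__ == "__main__":
    print(transcribe("https://example.com/sample-call.mp3"))
```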
“We’ve been working on building as close to human speech recognition quality as possible. It’s been a lot of work,” Fox said. He expects to reach that plateau in 2022.
He targets companies incorporating speech recognition into their products and makes it easy to buy. Customers pay on a usage basis; for every second of audio transcribed, AssemblyAI charges a fraction of a penny. Clients get billed monthly. If a customer uses 10 hours a month, it costs about nine dollars. If a customer uses a million hours a month, it costs about $900,000.
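A quick sanity check of that arithmetic, assuming a flat per-second rate; the rate below is inferred from the figures quoted in the article, not a published price sheet.

```python
# Inferred from the quoted figures: ~$9 for 10 hours implies roughly
# $0.00025 per second of audio (a fraction of a penny), billed monthly.
RATE_PER_SECOND = 0.00025  # assumption derived from the examples above


def monthly_cost(hours_of_audio: float) -> float:
    seconds = hours_of_audio * 3600
    return seconds * RATE_PER_SECOND


print(f"10 hours/month:        ${monthly_cost(10):,.2f}")         # ~$9
print(f"1,000,000 hours/month: ${monthly_cost(1_000_000):,.2f}")  # ~$900,000
```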
Voice recognition is a hot market. “Many new startups are being launched,” Fox said, providing opportunity. “Many interesting new businesses are being built on voice data.”
AssemblyAI’s product can detect sensitive topics such as hate speech and profanity, so customers can save on human content moderation.
Asked to describe what differentiates his technology, Fox said, “We are an experienced team of deep learning researchers,” with experience from companies including BMW, Apple, and Facebook. “We build very large, very accurate deep learning models that have recognition results far more accurate than a traditional machine learning approach. We build really large models using advanced neural network technologies.” He compared the approach to what OpenAI uses to develop its GPT-3 large language model.
In addition, they build AI features on top of the transcriptions, to provide summaries of audio and video content, which can be searched and indexed. “It goes beyond just transcription,” Fox said.
The company currently has 25 employees and expects to double in about four months. Business has been good. “There is an explosion of audio and video data online and customers want to be able to take advantage of it, so we see a lot of demand,” Fox said.
This is an age-old question. Some assert that there is the potential for knowledge that ought not to be known. In other words, there are ideas, concepts, or mental formulations that, should we become aware of them, could be our downfall. The discovery or invention of some new innovation or way of thinking could be unduly dangerous. It would be best to not go there, as it were, and avoid ever landing on such knowledge: forbidden knowledge.
The typical basis for wanting to forbid the discovery or emergence of forbidden knowledge is that the adverse consequences are overwhelming. The end result is so devastating and undercutting that the bad side outweighs the good that could be derived from the knowledge.
It is conceivable that there might be knowledge that is so bad that it has no good possibilities at all. Thus, rather than trying to balance or weigh the good versus the bad, the knowledge has no counterbalancing effects. It is just plain bad.
We are usually faced with the matter of knowledge that has both the good and the bad as to how it might be utilized or employed. This then leads to a dogged debate about whether the bad is so bad that it outweighs the good. On top of this, there is the unrealized bad and the unrealized good, which could be differentiated from the realized bad and the realized good (in essence, the knowledge might be said to be either good or bad, though this is purely conceptual and not put into real-world conditions to attest or become realized as such).
The most familiar reference to forbidden knowledge is likely evoked via the Garden of Eden and the essence of forbidden fruit.
A contemporary down-to-earth example often discussed about forbidden knowledge consists of the atomic bomb. Some suggest that the knowledge devised or invented to ultimately produce a nuclear bomb provides a quite visible and overt exemplar of the problems associated with knowledge. Had the knowledge about being able to attain an atomic bomb never been achieved, there presumably would not be any such device. In debates about the topic, it is feasible to take a resolute position favoring the attainment of an atomic bomb and there are equally counterbalancing contentions sternly disfavoring this attainment.
One perplexing problem about forbidden knowledge encompasses knowing beforehand the kind of knowledge that might end up in the forbidden category. This is a bit of a Catch-22 or circular type of puzzle. You might discover knowledge and then ascertain it ought to be forbidden, but the cat is kind of out of the bag due to the knowledge having been already uncovered or rendered. Oopsie, you should have in advance decided to not go there and therefore have avoided falling into the forbidden knowledge zone.
On a related twist, suppose that we could beforehand declare what type of knowledge is to be averted because it is predetermined as forbidden. Some people might accidentally discover the knowledge, doing so by happenstance, and now they’ve again potentially opened Pandora’s box. Meanwhile, there might be others that, regardless of being instructed to not derive any such stated forbidden knowledge, do so anyway.
This then takes us to a frequently used retort about forbidden knowledge, namely, if you don’t seek the forbidden knowledge there is a chance that someone else will, and you’ll be left in the dust because they got there first. In that preemptive viewpoint, the claim is that it is better to go ahead and forage for the forbidden knowledge and not get caught behind the eight-ball when someone else beats you to the punch.
Round and round we can go.
The main thing that most would agree to is that knowledge is power.
The alluded to power could be devastating and destroy others, possibly even leading to the self-destruction of the wielder of the knowledge. Yet there is also the potential for knowledge to be advantageous and save humanity from other ills.
Maybe we ought to say that knowledge is powerful. Despite that perhaps obvious proclamation, we might also add that knowledge can decay and gradually become outdated or less potent. Furthermore, since we are immersing ourselves herein into the cauldron of the love-it or hate-it knowledge conundrum, knowledge can be known and yet undervalued, perhaps only becoming valuable at a later time and in a different light.
There is a case to be made that humankind has a seemingly irresistible allure toward more and more knowledge. Some philosophers suggest you are unlikely to be able to bottle up or stop this quest for knowledge. If that’s the manner of how humanity will be, this implies that you must find ways to control or contain knowledge and give up on the belief that we can altogether avoid landing into forbidden knowledge.
There is a relatively new venue prompting a lot of anxious hand wringing pertaining to forbidden knowledge, namely the advent of Artificial Intelligence (AI).
Here’s the rub.
Suppose that we are able to craft AI systems that make use of knowledge about how humans can think. There are two major potential gotchas.
First, the AI systems themselves might end up doing good things, and they also might end up doing bad things. If the bad outweighs the good, maybe we are shooting ourselves in the foot by allowing AI to be put into use.
Secondly, perhaps this could be averted entirely by deciding that there is forbidden knowledge about how humans think, and we ought to not discover or reveal those mental mechanisms. It is the classic stepwise logic that step A axiomatically leads to step B. We won’t need to worry about AI systems (step B), if we never allow the achievement of step A (figuring out how humans think and then imparting that into computers), since the attainment of AI would presumably not arise.
In any case, there is inarguably a growing concern about AI.
Plenty of efforts are underway to promulgate a semblance of AI Ethics, meaning that the developers, and indeed all stakeholders, who are conceiving of, building, and putting into use an AI system need to consider the ethical aspects of their efforts. AI systems have been unveiled and placed into use replete with all sorts of notable concerns, including unsavory biases and other problems.
All told, one bold and somewhat stark argument is that the pursuit of AI is being underpinned or stoked by the discovery and then exploitation of forbidden knowledge.
Be aware that many would scoff at this allegation.
There are those deeply immersed in the field of AI who would laugh that there is anything in the entirety of AI to date that constitutes potential forbidden knowledge. The technology and technological elements are relatively ho-hum, they would argue. You would be hard-pressed to pinpoint what AI-related knowledge that is already known comes anywhere near the ballpark of forbidden knowledge.
For those that concur with that posture, there is the reply that it might be future knowledge, not yet attained, that turns out to be the forbidden kind, and that we are heading pell-mell down that path. Thus, they would concede that we haven’t arrived at forbidden knowledge at this juncture, but caution that this is an insidious distractor because it masks or belies our qualms about the possibility that it lies in wait at the next turn.
One area where AI is being actively used is to create Autonomous Vehicles (AVs).
We are gradually seeing the emergence of self-driving cars and can expect self-driving trucks, self-driving motorcycles, self-driving drones, self-driving planes, self-driving ships, self-driving submersibles, etc.
Today’s conventional cars are eventually going to give way to the advent of AI-based, true self-driving cars. Self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.
Here’s an intriguing question that has arisen: Might the crafting of AI-based true self-driving cars take us into the realm of discovering forbidden knowledge, and if so, what should be done about this?
Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars.
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
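For quick reference, here is a compact sketch of the level taxonomy as just described; the wording is a paraphrase of the descriptions above, not formal standards text.

```python
# Paraphrase of the driving-automation levels described above; not formal standards wording.
DRIVING_AUTOMATION_LEVELS = {
    2: "Semi-autonomous: human driver co-shares the task, aided by ADAS features",
    3: "Semi-autonomous: more automation, but a human driver must remain engaged",
    4: "True self-driving in narrow, selective conditions; no human driver needed there",
    5: "True self-driving anywhere a human could drive; no human driver needed at all",
}


def requires_human_driver(level: int) -> bool:
    """Levels below 4 still require an attentive, responsible human driver."""
    return level < 4


for level, description in DRIVING_AUTOMATION_LEVELS.items():
    print(f"Level {level}: {description} "
          f"(human driver required: {requires_human_driver(level)})")
```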
There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
The crux here is whether there is forbidden knowledge lurking within the existing and ongoing efforts to achieve AI-based true self-driving cars. We’ll begin by considering the status of the existent efforts and then shift into speculation about the future of such efforts.
Per the earlier discussion about whether any forbidden knowledge has already been revealed or discovered via the efforts toward today’s AI systems, the odds seem stacked against such a notion at this time, and the same could be said about the pursuit of self-driving cars. Essentially, there doesn’t seem to be any forbidden knowledge per se that has been discovered or revealed during the self-driving car development journey so far, at least with respect to the conventional wisdom about what forbidden knowledge might entail.
One could try to argue that it is premature to reach such a conclusion and that we might, later on, realize that forbidden knowledge was indeed uncovered or invented, and we just didn’t realize it. That is a rabbit hole that we’ll not go down for now, though you are welcome to keep that presumption at hand if so desired.
That covers the present, and ergo we can turn our attention to the future.
Generally, the efforts underway today have been primarily aimed at achieving Level 4, and the hope is that someday we will go beyond Level 4 and attain Level 5. To get to a robust Level 4, most would likely say that we can continue the existing approaches.
Not everyone would agree with that assumption. Some believe that we will get stymied within Level 4. Furthermore, the inability to produce a robust Level 4 will ostensibly preclude us from being able to attain Level 5. There is a contingent that suggests we need to start over and set aside the existing AI approaches, which otherwise are taking us down a dead-end or blind alley. An entirely new way of devising AI for autonomous vehicles is needed, they would vehemently argue.
There is also a contingent that asserts the Level 4 itself is a type of dead-end. In brief, those proponents would say that we will achieve a robust Level 4, though this will do little good towards attaining Level 5. Once again, their view is similar to the preceding remark that we will need to come up with some radically new understandings about AI and the nature of cognitive acumen in order to get self-driving cars into the Level 5 realm.
Aha, it is within that scope of having to dramatically revisit and revamp what AI is and how we can advance significantly in the pursuit of AI that the forbidden knowledge question can reside. In theory, perhaps the only means of attaining Level 5 will be to strike upon some knowledge that we do not yet know and that bodes for falling within the realm of forbidden knowledge.
To some, this seems farfetched.
They would emphatically ask; just what kind of knowledge are you even talking about?
Here’s their logic. Humans are able to drive cars. Humans do not seem to need or possess forbidden knowledge as it relates to the act of driving a car. Therefore, it seems ridiculous on the face of things to claim or contend that the only means to get AI-based true self-driving cars, driven on an equal basis with human drivers, would require the discovery or invention of whatever might be construed as forbidden knowledge.
Seems like pretty ironclad logic.
The retort is that humans have common-sense reasoning. With common-sense reasoning, we seem to know all sorts of things about the world around us. When we drive a car, we intrinsically make use of our common-sense reasoning. We take for granted that we do have a common-sense reasoning capacity, and similarly, we take for granted that it integrally comes to the fore when driving a car.
Attempts to create AI that can exhibit the equivalent of human common-sense reasoning have made ostensibly modest or some would say minimal progress (to clarify, those pursuing this line of inquiry are to be lauded, it’s just that no earth-shattering breakthroughs seem to have been reached and none seem on the immediate horizon). Yes, there are some quite fascinating and exciting efforts underway, but when you measure those against the everyday common-sense reasoning of humans, there is no comparison. They are night and day. If this were a contest, the humans win hands down, no doubt about it, and the AI experimental efforts encompassing common-sense reasoning are mere playthings in contrast.
You might have gleaned where this line of thought is headed.
The belief by some is that until we crack open the enigma of common-sense reasoning, there is little chance of achieving a Level 5, and perhaps also this will hold back the Level 4 too. It could be that a secret ingredient of sorts for autonomous vehicles is the need to figure out and include common-sense reasoning into AI-based driving and piloting systems.
If you buy into that logic, the added assertion is that maybe within the confines of how common-sense reasoning takes place is a semblance of forbidden knowledge. On the surface, you would certainly assume that if we knew entirely how common-sense reasoning works, there would not appear to be any cause for alarm or concern. The act of employing common-sense reasoning does not seem to necessarily embody forbidden knowledge.
The twist is that perhaps the underlying cognitive means that gives rise to the advent of common-sense reasoning is where there is forbidden knowledge. Some deep-rooted elements in the nature of human thought and how we form common sense and undertake common-sense reasoning are possibly a type of knowledge that will be shown as crucial and a forbidden knowledge formulation.
Wow, that’s quite a bit of pondering, contemplation, and (some would say) wild thinking.
Maybe so, but it is a consideration that some would wish that we gave at least some credence toward and devoted attention to. There is the angst that we might find ourselves by happenstance stumbling into forbidden knowledge on these voracious self-driving cars quests.
For however you might emphasize that having AI-based true self-driving cars will be a potential blessing, proffering mobility-for-all and leading to reducing the number of car crash-related fatalities, there is a sneaking suspicion that it will not be all-good. The catch or trap could be that there is some kind of forbidden knowledge that will get brought to light, and we will inevitably kick ourselves that we didn’t see it coming.
The next time you are munching on a delicious apple, give some thought to whether self-driving cars might be forbidden fruit.
We are on the path to taking a big bite, and we’ll have to see where that takes us.