As I wrote yesterday, I’ve been enthusiastic about my recent experiences with Artificial Intelligence (AI). I know AI works well as a research assistant, because yesterday AI discovered many sources of information I had not found after months of searching on my own.
And the AI results not only returned new pieces of information but was even more impressive at combining concepts to reveal new relationships among them, often providing a new path of information and concepts to explore.
But I also have grave concerns about AI.
My experiences with computers are relevant. When I was a student at Scattergood Friends School (late 1960’s), the University of Iowa (about fifteen miles away) donated time on their mainframe computers for use by students in nearby schools. My math teacher suggested I read a book about the FORTRAN computer language, which is how I began to learn, i.e. taught myself to program computers. In those days, prior to the availability of classes in computer science, most of us were self-taught. I went on to teach myself about six computer languages. A computer programmer has to be a life-long learner, because computer languages and systems were changing very rapidly. I usually spent about 20 percent of my time at work downloading and learning evolving information. It wasn’t always a smooth process. I spent about six months trying to learn the PROLOG language, and never mastered it.
During the summer prior to my Senior year at Scattergood, Don Laughlin (an Iowa Quaker and mentor) arranged for me to spend the summer at the University of Iowa Hospitals working with him in his medical electronics lab. The lab had just purchased one of the first commercially available desktop computers and wanted me to write the software to automate the process of calculating results from pulmonary function testing, which took a couple of hours, each, to do manually.
This was at a time when laboratories were transitioning from using oscilloscopes to measure and display data, to using computers, instead. The many advantages included real time display of signals (from babies breathing, in my case), the opening and closing of various valves to regulate the testing proedure, immediate calculation and display of complex results, and automatic storage of all the data in databases. But it immediately became apparent that developing such software was a complex and very long-term process. The last software system I worked on, which measured how well gases in a baby’s lungs diffused between the alveoli in the lungs, and the capillary blood, took three years to finish. That involved measuring eight channels of data (flow, volume, gas concentrations), reading gas concentrations in thousandths of a percent, monitoring and displaying the signals in real time, and making decisions, in real time, of when to turn a specific valve on or off. But as a result, ours is the only lab in the world that can do those measurements.
I also spent a lot of time writing “teaching programs”, using computers to display information, and giving and grading “tests” to students about the material. I especially liked “simulations”, such as a mathematical model that allowed students to make adjustments to how the respiratory system was functioning, for example, and to display the results of those changes.
This is on the “other side” of machine learning, which is a large part of AI. AI can sort through huge amounts of data to discover patterns, i.e. “learn” about the data.
My ultimate concern is AI might be used by computer systems to learn what humans don’t know, and use their ability to control devices, including movement, to somehow overcome humans in certain situations.
Prior to that, now actually, there are many concerns about how “well” AI is working in many instances. This would not be the time to allow AI to make decisions for you.
I thought I would see what AI had to say about cautions related to artificial intelligence. What follows was completely written by the AI system I’m using, copy.ai
NOTE: The following was completely generated by copy.ai from a list of keywords I entered related to caution about artificial intelligence.
Caution about artificial intelligence
Artificial intelligence (AI) is the use of computers to imitate human intelligence. AI can be used for many purposes, including making recommendations about what products to buy or which advertisements to view. It has become a hot topic in business and politics but also raises concerns about its potential dangers. This post will address some common questions about artificial intelligence such as:
Artificial intelligence will never be perfect.
AI is not perfect yet, but it can be improved.
AI is still a work in progress, and there’s still plenty of room for improvement. The best way to know if your AI system is up to par is by testing it against other models—that way you’ll know if it’s performing well or not. If you’re in doubt about the accuracy of your model (or how well it’s working), consider running some tests on it!
AI can make mistakes.
AI can make mistakes. It’s not perfect, and it should be treated with the same respect as any other human being.
AI can make mistakes in the same way that humans do: by making assumptions about their environment and then acting on those assumptions without much thought or consideration of consequences. For example, if your computer tells you “There’s a dog at the door,” this isn’t necessarily accurate—sometimes dogs are just dogs and don’t have a door (or even an owner). On top of that, computers aren’t really good at perceiving subtle differences between different kinds of animals like dogs; if you don’t want your house cat getting into mischief because someone told you there was a big black dog outside waiting for her to come home from school with cookies from Dunkin’ Donuts…well…then maybe we should stop using AI altogether until we figure out how best to train these programs?
What exactly do we mean when we say that a computer program is “learning”?
When you’re talking about artificial intelligence, the term “learning” is used a lot. But what exactly does it mean?
First of all, learning is a process that occurs over time. It involves changing the way you do things and/or think about things. Learning also involves changing your view of everything around you—your environment and its contents are part of this change as well (e.g., when someone learns how to drive). Finally, learning means changing from one state or stage in life to another stage or state—for example: from childhood into adulthood; from being single into getting married; etcetera
Can you teach a computer to learn?
When you learn something new, are you teaching the information to yourself?
- The answer is yes. When you’re learning something new, your brain has to store that information in order for it to be useful later on. This process of storing and retrieving information is called memory and memory storage is what makes up human intelligence.
So can computers learn? Well… sorta… but not exactly like us humans do! A computer learns by reproducing patterns from previous experiences (e.g., if they’ve seen an object before, they’ll recognize its shape). It doesn’t necessarily understand why these patterns exist — just what they do when combined with other objects or situations (for example: how many squares are needed around a triangle?). That said, computers have been programmed with algorithms designed specifically for teaching machines how best to learn tasks like memorizing lists of words or playing chess against humans using only their neural networks as processors instead of traditional CPUs/GPUs (which require lots more energy).
When it comes to the use of AI, there is always a trade off between accuracy and speed.
When it comes to the use of AI, there is always a trade off between accuracy and speed. The speed of artificial intelligence is limited by the speed of your computer, which can be improved with faster processors and more RAM (Random Access Memory). The accuracy of artificial intelligence is limited by the accuracy of your data, which can be improved with more advanced statistical methods.
Although both these factors play important roles in determining how fast or accurate an algorithm will be at identifying objects in photographs, they don’t necessarily have to be mutually exclusive—you may have both high-quality images as well as good computational power available when building an algorithm for this task!
We need to think about how to continue developing and improving artificial intelligence, but also recognize its limitations
We need to think about how to continue developing and improving artificial intelligence, but also recognize its limitations.
The development of artificial intelligence is a complicated process that involves many different fields. The field of computer science has developed new ways of thinking about the problem, but there are still many unanswered questions. For example, what does it mean for an artificially intelligent system to have “intelligence”? What would it mean if we created an artificial being with no emotions or empathy?
We must continue developing and improving our understanding of these issues so that we can make informed decisions about how best to use AIs in our daily lives – as well as what they’re capable of doing in general terms!
We now know that artificial intelligence is still in its infancy, but it’s a fascinating topic to explore. There are many potential applications of AI today, including self-driving cars and robots capable of performing complex tasks. While AI can make our lives easier by performing certain tasks more efficiently, we should be careful not to forget its limitations—and the need for human oversight when it comes to making decisions about our own safety or well-being.
Neonatal Intensive Care Unit, circa 1980. Permission was obtained for patient photo for a publication about Riley Hospital for Children.
One thought on “AI revisited”