AI is great, but it’s only as good as its programmers!

It goes without saying (but we’ll write it anyway) that the booming ChatGPT discussion has given us a lot of opportunities to get out there and talk AI (including Alanna). We’re grateful that our Co-Founder and President, Hoyt Mann, was asked to discuss AI in several forums, including ALTA’s fantastic ALTA Insights webinar, Rick Grant’s Strategic Targeting podcast, October Research’s highly regarded NS3 and, most recently, the RESPRO RISE Fall Seminar. People are, justifiably, awestruck by the potential of AI and the impact it could have not only on our industry and the greater economy, but also on the way we go about everything on a daily basis.

However, while AI, in general, is absolutely something to behold, let’s also keep in mind that we’re nowhere near Terminator/Skynet status yet. Not by a long shot. Its potential is vast, but the sky is not necessarily the limit for a few key reasons.

AI is rule-driven

AI, at its core, is a collection of algorithms and programmed instructions designed to perform precisely defined tasks. It presents information or acts based upon that programming. ChatGPT provides a great example. The way you ask it a question will impact (sometimes significantly) the way it responds and what information it gives you. But it will still be very consistent about what information it shares if you are consistent in the way you ask your question.
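To make that idea concrete, here is a minimal sketch of a rule-driven responder. This is not how ChatGPT actually works internally; the questions, answers, and `respond` function are all made up purely to show that a system acting on programmed rules gives the same output for the same input, and behaves differently the moment the input changes.

```python
# Hypothetical rules mapping normalized questions to canned answers.
RULES = {
    "what is title insurance": "Title insurance protects against defects in a property's title.",
    "what are closing costs": "Closing costs are fees paid at settlement of a real estate transaction.",
}

def respond(question: str) -> str:
    # Normalize the question, then look it up: programming, not understanding.
    key = question.lower().strip(" ?!.")
    return RULES.get(key, "I don't have a rule for that question.")

# Ask the same question the same way and the answer never varies.
print(respond("What is title insurance?"))
print(respond("What is title insurance?"))  # identical output
# Phrase it outside the rules and the behavior changes.
print(respond("Tell me about title insurance"))
```

The point of the sketch is the lookup: the system never reasons about the question, it only matches it against what it was given.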

Programmers have long used the term GIGO to explain this concept. “Garbage in. Garbage out.” Flawed or nonsensical data produces nonsensical output. GIGO not only applies to programming but also to human decision-making. Computers don’t—CAN’T—think for themselves and will produce wrong answers if given incorrect information. In the ChatGPT world, users often extend a high level of trust without thoughtful fact-checking or source verification.
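A minimal GIGO sketch, using made-up sensor readings: the averaging code below is perfectly correct, but it has no way to know the data is wrong, so flawed input produces a flawed answer.

```python
def average(values: list[float]) -> float:
    # Correct arithmetic; the function trusts whatever it is handed.
    return sum(values) / len(values)

good_readings = [72.0, 71.5, 72.5]   # accurate temperature data
bad_readings = [72.0, 71.5, -999.0]  # -999.0 is a sensor error code, not a temperature

print(average(good_readings))  # about 72.0: a sensible answer
print(average(bad_readings))   # about -285.2: garbage out
```

The computer never “notices” the -999.0 is an error code; a human glancing at the data would.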

AI does not have common sense, and it is not capable of independent thought or reasoning. Can it “learn”? Yes, but in a way far different from that of humans.

The human mind, on the other hand, does not process or operate on the basis of algorithms. While AI may mimic, the human mind can conceive and create. Robert Marks provides another good example—biting into a lemon. “No software engineer will ever capture [that experience] in algorithmic form.”

AI has no soul

We like to say that AI has a huge brain, but no soul. It’s not the human brain, nor does it do most of the things the human brain can do. It functions as programmed—replete with any inherent biases or exclusions in that programming. AI is limited by the data used to “train” its programming. Entrepreneur provided a great example of this. “An AI system that has been trained on a dataset that only includes pictures of white cats might not be able to accurately identify a black cat. This is because the system has not been exposed to enough examples of black cats to learn how to identify them.”

The truth is that AI could, in time, theoretically become a serious threat. But it will never be more intelligent than the human mind.

Having said that, AI continues to evolve. That will bring some complications with it, including ethical considerations, more complex fraud and much more. We’re already seeing just how much conversational AI, like Alanna, can improve a title agent’s operation. Contact us today and let us show you just how much Alanna could streamline your processes.