Updated: Sep 12
I remember when my grandfather got his first iPhone. He, a well-read student of history, proudly showed it off, declaring, “I am not a Luddite.” The Luddites were early 19th-century English textile workers who destroyed new, productivity-enhancing machinery. Opposition to emerging technologies is nothing new to the human experience. With the emergence of AI and large language models like ChatGPT, we can expect some pushback.
Users of ChatGPT quickly notice how sensitive results are to prompt phrasing. Nevertheless, there are uses: it can help proofread, brainstorm, or source information. Too often, though, results leave much to be desired. Yes, AI passed the bar exam. I even gave the final exam for the predictive analytics course I teach to ChatGPT. It earned a C, and I was being generous. Try it yourself: go to ChatGPT and ask, “I have a 10-gallon bucket and a 5-gallon bucket. How do I measure 5 gallons?” The point is, it can lack common sense.
ChatGPT is a tool that rapidly learns from the entire internet. Like humans, AI learns from practice, from trial and error, and from training. AI is only as good as the information it's trained on, and the processes driving recommendations are opaque to users. Moreover, AI is not accountable for its decisions and recommendations. Currently, much of the AI we encounter is designed to establish relationships between particular words and make predictions. Think autocorrect and autocomplete when texting. How often do our phones autocorrect to the wrong word?
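To make the word-prediction idea concrete, here is a toy sketch, far simpler than what ChatGPT actually does, of how a program can learn relationships between words and use them to "autocomplete." All of the data and names below are illustrative:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows
# each word in a tiny, made-up "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1  # e.g., "cat" has followed "the"

def predict(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

A predictor like this is only as good as its training text, and when the data points the wrong way, so does the prediction, which is exactly how a phone autocorrects to the wrong word.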
Like any emerging technology, AI carries substantial upfront costs. Standing up an AI system requires teams of highly skilled, highly educated, and expensive engineers. There are impactful smaller-scale benefits, to be sure, but the initial investment can be prohibitive. Moreover, the firms deploying AI see much more benefit in identifying fraudulent charges, making purchase recommendations, or targeting ads. For now, AI focuses on the tactical and the automatable.
It seems there is some rich, low-hanging AI fruit for senior housing providers. The Journal of Active Aging recently published a piece that provides a great overview of ChatGPT and its uses in senior housing. Senior Housing News describes how providers use AI and machine learning to support tasks like scheduling, budgeting, medication management, and health-outcomes tracking. While the question of scale remains, it is important that senior living organizations gain familiarity with these technologies. Machine learning, a type of AI, offers rich opportunities when integrated with existing software and platforms because it quickly sifts through mounds of data to make predictions. Many programs already have embedded analytics and algorithms supporting decision-making. That said, it is crucial to track the quality of predictions and make corrections where necessary.
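Tracking prediction quality need not be elaborate. A minimal sketch, with invented numbers and a threshold chosen purely for illustration, is to log each prediction alongside the actual outcome and watch the accuracy rate:

```python
# Toy prediction-quality tracker: compare logged predictions to
# actual outcomes and flag when accuracy slips (all numbers illustrative).
predicted = [1, 0, 1, 1, 0, 1, 0, 0]  # model's yes/no calls
actual    = [1, 0, 0, 1, 0, 1, 1, 0]  # what actually happened

correct = sum(p == a for p, a in zip(predicted, actual))
accuracy = correct / len(actual)

print(f"accuracy: {accuracy:.0%}")  # 6 of 8 correct -> 75%
if accuracy < 0.80:  # the target is a management choice, not a standard
    print("accuracy below target; review the model")
```

The design point is simply that "make corrections where necessary" requires a record of predictions versus outcomes; without that log, there is nothing to correct against.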
All told, AI will never replace human decision-making so long as humans are accountable for the decisions made. The Luddites were wrong. Technology is a tool, and the human selects the right tool for the job. Our knee-jerk, apocalyptic reactions to innovation stem from our desire to maintain the status quo. This makes for good entertainment (e.g., The Terminator, The Matrix) but not good planning.
Dan Lindberg is the founder and principal of Applied Economic Insight LLC, which enables municipalities, developers, owners, and operators to enliven residential, senior housing, and healthcare real estate. He has a graduate degree in economics and teaches courses in business analytics at Marquette University. His article “The price elasticity of senior housing demand: is it a necessity or a luxury?”, published in Business Economics, won the 2022 Contributed Paper Award from the National Association for Business Economics. His firm is part of Stackpole and Associates' network of healthcare consultants.