The Upcoming Role of AI Is Everywhere
There's a not-so-quiet revolution happening across the entire field of computing. To put it in context, I have to take you on a trip down memory lane. When I first started in the computing industry in the mid-1980s (1983-86), AI was already a big deal, both in research and in ambitious attempts at commercial development.
There were specialized development tools and environments (CLISP, built on LISP, the list-processing programming language) and even dedicated list-processing hardware platforms (the Xerox Dolphin), in fairly wide use on some ambitious development programs then underway.
How ambitious? Consider Doug Lenat's CYC (short for encyclopedia), conceived as a "comprehensive ontology and knowledge base that spans the basic concepts about how the world works," to quote the relevant Wikipedia article. And indeed, AI did have some successes at the time, most notably DEC's XCON/XSEL, an AI-driven VAX configurator in production use between 1980 and 1986.
The true-blue, starry-eyed romance, however, never quite set in. AI failed to take the world by storm, as its proponents and practitioners at the time had hoped.
First Fits and Starts, Then a Sustained Explosion
The field of AI continued to evolve in fits and starts until the turn of the century. The focus of the field also changed, from seeking to encapsulate and represent human knowledge to a data-driven approach that depended on acquiring, grooming, and analyzing huge volumes of data. From this changed approach we get not only Machine Learning but also Big Data and Data Science.
Today, all of these things are busily transforming the way that computing works, how data gets used, and ultimately how people understand what the data has to tell them. As a full-time technology writer, I work with companies of many kinds to help them explain their products, processes, and vision for how AI-derived technologies work.
Over the past decade, AI and machine learning (ML) have crept steadily into that word stream, gaining ever more visibility and prominence. ML, it seems, is good at recognizing patterns, especially those that reveal interrelationships among things that are too complex for human brains to notice without AI-driven assistance.
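To make that pattern-finding claim concrete, here's a minimal sketch, assuming Python with NumPy and scikit-learn; the data, the hidden group structure, and every name in it are invented purely for illustration, not drawn from any real system:

```python
# A minimal sketch of ML-style pattern recognition. The "behavior" records
# below contain two hidden groups that a human scanning raw numbers would
# be unlikely to spot; k-means clustering recovers them without being told
# they exist.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)

# Two synthetic populations with different (hidden) feature profiles.
group_a = rng.normal(loc=[2.0, 5.0, 0.5], scale=0.4, size=(100, 3))
group_b = rng.normal(loc=[3.5, 1.0, 2.5], scale=0.4, size=(100, 3))
records = np.vstack([group_a, group_b])

# Standardize the features, then let k-means find the latent grouping.
scaled = StandardScaler().fit_transform(records)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

print("records assigned to each cluster:", np.bincount(labels))
```

On data this cleanly separated, the algorithm recovers the two groups essentially perfectly; the point is that nothing in the code tells it the groups are there.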
And in fact, AI is proving extremely important across a broad range of domains, including information security, shopping-behavior analysis, genomics, autonomous vehicles, fraud detection, and more. In most areas of computing, if some useful application for AI/ML isn't already in the works, you can be pretty sure that somebody is thinking about, and experimenting with, ways to put it to work.
Get Ready for AI in Workplace Communication and Activity
Earlier this month (February 2021), Microsoft announced Viva, an employee experience platform designed to "bring together communications, knowledge, learning, resources and insights." Simply put, Viva leverages the way people use Microsoft Office (including and especially Teams).
Viva puts AI to work analyzing patterns of communication, activity, and usage to help guide workers to additional information and training (the "knowledge, learning, resources and insights" quoted above) in all kinds of interesting ways. Among other things, it can point individuals to learning materials and technical resources that provide input (and, ideally, clarity) on technical topics of interest and value.
But it can also watch how individuals use their technology tools (including the Office suite and the Windows 10 OS) and suggest more useful, powerful, and productive ways to get their jobs done.
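Microsoft hasn't published Viva's internals, but Microsoft Graph already exposes some of the usage signals this kind of analysis builds on. The following is a minimal, hypothetical sketch in Python using the requests library against Graph's item-insights endpoint; the access token placeholder stands in for real OAuth authentication, which is omitted here:

```python
# Minimal sketch: list documents "trending around" the signed-in user via
# Microsoft Graph item insights. ACCESS_TOKEN is a hypothetical placeholder;
# real code would acquire a token first (for example, with the MSAL library).
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<oauth2-access-token>"  # placeholder, not a real credential

resp = requests.get(
    f"{GRAPH_BASE}/me/insights/trending",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Each insight carries a resourceVisualization block describing the document.
for item in resp.json().get("value", []):
    viz = item.get("resourceVisualization", {})
    print(f"{viz.get('title')} ({viz.get('type')})")
```

Signals like these (what's trending, what's shared, what's used) are only raw material; the interesting part of a system like Viva is the ML layer that turns them into suggestions.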
As a Windows Insider MVP since 2018, I'm keenly interested in understanding how to manage, maintain, and get the best use out of the OS platforms and applications at my disposal. I see this as the tip of what could be an enormous influx of added knowledge and skill for people who use computers to get their work done (as so many of us do nowadays).
I don't mean this exposition as an endorsement of Microsoft platforms and technologies, per se. It's more that I see what they're attempting as a grand and noble experiment, one that potentially shows how AI is reaching beyond data analysis for particular business purposes, beyond the creation of value and competitive advantage, and beyond focused research into specific technical or physical subject matters.
Viva shows what AI/ML might be able to do as it gets applied to general-use computing, productivity, and office work, though the outcome is still to be determined and the experience still to be lived through.
I see this as an amazing opportunity with enormous potential. If it goes horribly wrong, it could turn into an invasion of privacy like no other we've seen so far. If it goes mostly right (and I'm hopeful it will), it could make people not just better workers but also better human beings.
I'm also hopeful it could shield everyday technology users from most of the hairy details involved in the tweaking, tuning, and problem-solving that IT professionals revel in but others dread. Whatever happens, it promises to change man-machine interaction forever. Let's hope it changes things for the better and brings a better experience to all of us.