Nicole Simonek

Breaking Down “AI” and “Generative AI”

Exploring the real meaning behind the buzzwords “AI” and “Generative AI”

In today’s world, I find that the buzzwords “AI” and “Generative AI” are thrown around in the media, by leaders, and in day-to-day conversations with little to no real understanding of their meaning. Even when reading up on different companies and start-ups, these buzzwords find their way into the heart of their product offerings. And of course, some people like to proclaim these buzzwords in conversations to appear tech-savvy or include them in taglines about their occupation.

This is all fine by me, as long as the purpose is to keep things high-level for an audience, and that person or company has a deep understanding of what it is. However, I would bet that a sizeable percentage has little to absolutely no clue. Many times, it appears to be a copycat competition: how fast will people adopt it in their organizations just because someone else is doing it? In addition, repeatedly using these vague buzzwords can contribute to fear-mongering. My agenda is not to bash anyone, but to dissect these words into smaller parts so that anyone is able to make better assumptions and ask questions.

This also falls in line with my philosophy about “imposter syndrome”. There are many ways to approach it, but if you “fake it” for too long with no real desire to truly understand something, it can be very detrimental to yourself and others. There is a difference between having enough qualifications with some doubts about deserving to be somewhere versus being pompous about a disingenuous title or status with no intention of ever closing the gap; I am talking about the latter, not the former. The key is to learn and ask questions about what you do not know, as I would bet most people had the same questions or also skipped out on gaining a true understanding.

Drawing a parallel to this blog’s discussion, I think these buzzwords blew up so fast that a lot of people did not stop for a second to truly understand them before hopping onto the “AI” rocket ship. This is not anyone’s fault, as tech has a reputation for sounding like jumbled jargon, with few people willing to create a fully inclusive environment for discussion. Fear not, my readers: we are in this together, and I too have so much more to learn.

Let’s start by breaking down “AI”. We may know that it stands for Artificial Intelligence, but what else? If we were to google an image of “AI”, oftentimes a personified image of a robot would pop up. But what even is it? “AI” is an umbrella term that encapsulates several things. It includes machine learning and deep learning, and is considered an entire field. Specific areas and model types include computer vision, neural networks, natural language processing, reinforcement learning, classification, and much more. When we picture “AI”, it should look more like code than a giant army of robots taking over the world.

So if “AI” consists of machine learning and deep learning, what are they? Let’s start with machine learning. At a very high level, machine learning ingests data, trains an algorithm or model on that data, then outputs predicted values. As you can see, having the right input data is crucial to the predictive power of the model! Without training data that is representative of a population, huge mistakes occur, such as the controversies over the years around models biased on race and gender. These models find patterns and repeated behavior in the data they are given.
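For readers who like to peek under the hood, here is a tiny Python sketch of that ingest-train-predict loop. It uses the open-source scikit-learn library and its built-in iris flower dataset; these are my own illustrative choices, not anything a particular company is running in production.

```python
# A minimal sketch of the loop described above: 1) ingest data,
# 2) train an algorithm on it, 3) output predictions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)          # 1) ingest data: measurements plus labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # 2) choose an algorithm
model.fit(X_train, y_train)                #    and train it on the data
print(model.predict(X_test[:5]))           # 3) output predicted values
print(model.score(X_test, y_test))         # how well it does depends on the data it saw
```

Notice that the “intelligence” here is just a function fit to data: change the training data, and the predictions change with it.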

What is the difference between machine learning and deep learning? Deep learning is a subset of machine learning and utilizes something called “neural networks”. A neural network loosely imitates the way a human brain processes information, passing data through an input layer, one or more hidden layers, then an output layer. But it still 1) ingests data, 2) runs a model/algorithm, and 3) outputs predictions.
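To show that a neural network follows the same three steps, here is another small sketch, again using scikit-learn (an illustrative choice on my part). The only real change is that the model now has a hidden layer sitting between its inputs and outputs.

```python
# Same ingest -> train -> predict steps, but with a small neural network:
# input layer = 4 flower measurements, one hidden layer of 16 neurons,
# output layer = the 3 possible species.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)                  # 1) ingest data
net = MLPClassifier(hidden_layer_sizes=(16,),      # one hidden layer with 16 neurons
                    max_iter=2000, random_state=0)
net.fit(X, y)                                      # 2) train the network
print(net.predict(X[:5]))                          # 3) output predictions
```

Real deep learning models stack many more layers and parameters, but the skeleton is the same.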

I want to spend some time on the meaning behind computer vision and natural language processing in particular (these are popular “AI” model types, and considered the most “scary”). Computer vision models are not “seeing” with robotic alien eyeballs. A computer vision model ingests data from images and videos, each paired with a label stating what it is, for example “dog” or “cat”. The images and videos are resized, normalized, broken down into tiny pixels, and converted to numerical data. From there, the model detects different parts of the picture such as colors, objects, and lines, and uses these patterns to determine whether it is a “dog” or a “cat”. The model may, for example, output a 0 for “dog” and a 1 for “cat”. This is all extremely high level, but the point is that the model is using data, not eyeballs.
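To make the “pixels become numbers” idea concrete, here is a small sketch using scikit-learn’s built-in handwritten digits dataset. I am using tiny digit images instead of dog and cat photos purely because they ship with the library, but the principle is identical: the model only ever sees grids of numbers.

```python
# Images are just arrays of numbers: each 8x8 digit image is a grid of
# pixel intensities, and the model learns patterns in those numbers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()
print(digits.images[0])               # an 8x8 grid of pixel values, no eyeballs involved

X = digits.images.reshape(len(digits.images), -1) / 16.0   # flatten and normalize to 0-1
y = digits.target                                          # the label: which digit it is

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(clf.predict(X_test[:5]))        # predictions based purely on pixel patterns
```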

Similarly, natural language processing is not a “talking hyperpolyglot alien”. You may think of Alexa, Siri, or various chatbots. Again, at a very high level, it ingests data, parses sentences and words, counts word frequencies, slices words up, removes common words, reduces words to their root forms, marks the part of speech (verb, adjective, etc.), assigns probabilities that a certain word belongs to a specific topic, and more. Breaking down, annotating, and labeling the smaller parts of the dataset is key to the algorithm performing well.
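Here is a deliberately toy, self-contained Python sketch of a few of those preprocessing steps: splitting text into words, removing common “stop” words, crudely reducing words to a root form, and counting frequencies. Real NLP libraries handle each step far more carefully; this is only meant to show that the “magic” is a pile of small text-processing operations.

```python
# Toy preprocessing: tokenize, drop common words, reduce to rough root forms, count.
from collections import Counter
import re

text = "The dogs were barking loudly while the cats slept and the dog barked again"
stop_words = {"the", "and", "were", "while", "a", "an"}   # common words to remove

tokens = re.findall(r"[a-z]+", text.lower())              # parse into lowercase words
tokens = [t for t in tokens if t not in stop_words]       # remove common words

def crude_stem(word):                                     # toy reduction to a root form
    for suffix in ("ing", "ed", "ly", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

stems = [crude_stem(t) for t in tokens]
print(Counter(stems).most_common(5))   # "dog" and "bark" get counted across their variants
```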

So now you may be wondering, where did “Generative AI” come from, and what makes it different from “AI”? When thinking of “Generative AI”, one would normally think of ChatGPT, which has rapidly grown in popularity, this year especially. But what even is it? It’s a deep learning model with a polished User Interface (UI) on top of it! I emphasize the User Interface, as that is the reason it is so well known today; the type of model behind it has existed for several years. The User Interface is the star of the show, and the main reason that anyone can prompt it with no coding or technology background required.

And if we break down the word “Generative”, it is just as it implies: it generates new data based on the enormous amount of data it was trained on. On the modeling front, a popular buzzword is the Large Language Model (LLM). Broken down, it really spells out its own definition: it requires a “large” (and I mean gigantic!) amount of data, uses many different types of natural “language” processing (what we just reviewed), and is a deep learning “model”! LLMs are considered a subset of the larger natural language processing field. Another buzzword used in “Generative AI” is the foundation model, of which LLMs are one category. Foundation models are large models trained on tons of data that serve as building blocks for many different applications.
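If you are curious what “generating new data from a model trained on lots of data” looks like in code, here is a small sketch using the open-source Hugging Face transformers library and the comparatively tiny GPT-2 model. This is not ChatGPT itself (that model is far larger and sits behind an API); it is simply an illustrative example of the same family of ideas, and it requires installing the library and downloading the pretrained weights.

```python
# Generate a short continuation of a prompt with a small pretrained language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # downloads pretrained GPT-2 weights
result = generator("Artificial intelligence is",        # the prompt
                   max_new_tokens=20,                   # how much new text to generate
                   num_return_sequences=1)
print(result[0]["generated_text"])                      # new text, shaped by the training data
```

ChatGPT wraps a much bigger version of this idea inside that friendly User Interface we just talked about.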

When I think about the urgency that many leaders have around “AI” and “Generative AI” today, I am often reminded of the mistake people may make when interviewing for an analyst, engineer, or scientist role. When posed a question about a business challenge, many will reach for an out-of-the-box answer and say, “I would create X model”. But this completely misses the point, as there are many other considerations. Is a full-blown machine learning or deep learning model actually needed, or are there quicker and/or cheaper alternatives to achieve the same result? Why that specific model and not another one? Is it actually solving the business problem at hand? Will there be a Return on Investment (ROI)? And is there enough accurate data to even train a model? Don’t get me wrong, these evolved models are definitely going to revolutionize our world, but they do not need to be used for everything, at least not right now.

So the next time that someone throws one of these buzzwords at you, follow up by asking what specific model type is being used and why, what the business purpose is for investing a large amount of capital into it, what tradeoffs were considered, or how they collect and audit such a large amount of training data. Or ask them about their own perspective on these topics, as this one is from my own personal lens.

This blog merely scratches the surface, but I hope it answers some previous unknowns, provides some comfort to those who did not know where to start, and helps people of all backgrounds join in on the conversation. As with many things, sometimes you just need to keep breaking down the whole into smaller and smaller parts in order to gain an understanding of what it truly is!


Interested in working with me? Schedule here!


If you enjoyed this blog post, I would greatly appreciate you taking a moment to browse my other blog posts (I write on lifestyle, beauty, travel, restaurants, working in tech, and cocktails + wine), subscribe, and/or make a donation. Donation proceeds go toward monthly Squarespace fees, PO box fees, website enhancements, ad campaigns, SEO tools, and time investment in addition to my full-time job. Thank you for your readership from the bottom of my heart! xx Nicole
