How Vector Search Empowers Generative AI

One of the biggest advancements of the last few years, and one that has helped power the generative AI revolution we’re seeing, is vector embedding. In short, vector embedding is the process of encoding input context into a mathematical representation — a vector of numbers — that we can perform computations against. We can take advantage of this technology to make our retrieval-augmented generation apps even more powerful.
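The key computation embeddings enable is similarity: inputs with related meanings map to nearby vectors, so comparing vectors approximates comparing meanings. A minimal sketch, using tiny made-up 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions) and cosine similarity as the comparison:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two embedding vectors; values near 1.0 mean similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for illustration only — not output from a real model.
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.3]

# "cat" should score closer to "kitten" than to "car".
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

Vector search databases do essentially this comparison at scale, returning the stored vectors closest to a query vector.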

Natural Language Query Processing: Part 2, How To Get It

This is a continuation of the previous post, NLQP Part 1: What Do They Want?, where we discuss how to go from a natural language question (something like “What was my average revenue in the last six months of 2023?”) to an answer, using generative AI. In this part, we’ll discuss how to use the question classification we generated in Part 1 to query the data and answer the question using retrieval-augmented generation.
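The flow described above — classification in, data query out, results fed back to the model as context — can be sketched roughly as follows. The classification fields, labels, and revenue figures here are hypothetical stand-ins, not taken from the post:

```python
# Hypothetical sketch of the Part 1 -> Part 2 handoff: a structured
# classification drives a data query, and the results become RAG context.
from dataclasses import dataclass

@dataclass
class Classification:
    intent: str   # e.g. "aggregate_revenue" (illustrative label)
    metric: str   # e.g. "revenue"
    period: str   # e.g. "2023-H2"

def run_query(c: Classification) -> list[float]:
    # Stand-in for a real query built from the classification;
    # the figures below are invented for the example.
    monthly_revenue = {"2023-H2": [10.0, 12.5, 11.0, 13.0, 12.0, 14.5]}
    return monthly_revenue[c.period]

def build_prompt(question: str, c: Classification) -> str:
    rows = run_query(c)
    avg = sum(rows) / len(rows)
    # The retrieved numbers ground the model's answer.
    return (f"Context: average {c.metric} for {c.period} was {avg:.2f}.\n"
            f"Question: {question}\n"
            f"Answer using only the context above.")

prompt = build_prompt(
    "What was my average revenue in the last six months of 2023?",
    Classification("aggregate_revenue", "revenue", "2023-H2"),
)
print(prompt)
```

The prompt that results is what gets sent to the generative model, so the answer is constrained by real data rather than the model's guesswork.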

Natural Language Query Processing: Part 1, What Do They Want?

One of the first work projects I did with AI in any capacity was a natural language query processor. This seems to be the de facto use case for AI in a business setting: being able to query data with a question like one you’d ask your Google Assistant or type into a chat dialog. Microsoft in particular has been investing in this space for years, with its Teams Copilot and Microsoft Fabric products.

Poetry in Motion - My Generative AI eInk Display Clock

One of the first projects I ever did with generative AI (outside of just playing around with ChatGPT, of course) was creating an eInk display clock that shows a generated poem. It was modeled after the Poem/1, a popular Kickstarter project. This was my first interaction with the OpenAI API, eInk displays, and a Raspberry Pi Pico W.

Neural Networks, Machine Learning, AI, and You! - Part 3

The last chunk of slides from a presentation I gave at my workplace concerning generative AI. In this section I talk about how generative AI fits into the general definition of “intelligence”, as well as the issues to be aware of when working with generative AI, including dataset bias, hallucinations, guardrails, and more. I’ve also included a few different ways people can get started integrating with generative AI, including the popular Retrieval-Augmented Generation method. This is part three of the three-part presentation. Part 1 is here, and Part 2 is here. As a reminder, a lot of this content was based on material I found on Jay Alammar’s site and YouTube channel — you can check it out at https://jalammar.github.io/ It’s also worth noting that there’s quite a bit I’d update on these slides now that I’m nearly a year into working with generative AI, but as a primer it still works pretty well.