
Grand Slam Journey
This podcast discusses various topics around - sports, business, technology, mindset, health, fitness, and tips for growth. Topics range from what sports have taught us and how we transitioned from a singular focus and pursuit of our athletic goals and dreams to the decision to end our sports careers and move into the next phase of our lives. My guests share how they found their passion and purpose, tips for maximizing potential - holistically - physically and mentally, how they transitioned from one chapter of their lives to the next, and how to drive success in sport, business, technology, and personal life.
38. Polly Allen: Hands-On with AI - Prototyping, Learning, and Revolutionizing the Industry
What happens when you combine a passion for technology and a dedication to creating meaningful AI careers? You get Polly Allen, a professional business manager and leader with over 20 years of experience in high tech. In this engaging episode, I chatted with Polly about her journey from a remote part of Canada to leading the team that launched the very first generative AI answers on Alexa. Discover how Polly's upbringing and early exposure to computers sparked her lifelong love for technology and led her to become a prominent figure in the AI industry.
During this episode Polly and I dive deep into the exciting world of AI and its implications, from cybersecurity and fraud concerns to the importance of risk review and governance in AI projects. Uncover how AI is developing to become more sophisticated and the potential for open source projects and no-code tools to make powerful systems more accessible to everyone. Plus, we touch on the challenges and opportunities of bringing more women into computer science and the tech industry.
Listen in as we discuss the importance of getting hands-on experience with AI projects and the tools available to build generative AI prototypes. Learn the best ways to stay up-to-date with AI advancements and be inspired to explore the possibilities of AI in your own career and life. Don't miss this insightful conversation full of valuable advice from a true AI enthusiast and expert — Polly Allen.
Visit Polly's website for current and upcoming courses, and get your free AI Learning Guide: https://www.aicareerboost.com/
Sign up for Polly's newsletter: https://www.aicareerboost.com/interested
______________
Partnerships:
- Noble Cold Plunge: https://www.noblehormetics.com/product/sisu-cold-plunge/ Get $100 discount with code: GSJ100
______________
EIGHT SLEEP
Save $200 on Eight Sleep and get better quality and deeper sleep with automatic temperature adjustment
LEORÊVER COMPRESSION AND ACTIVEWEAR
Get 10% off Leorêver Balanced Compression and Activewear to elevate your confidence and performance
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
This content is also available in a video version on YouTube.
If you enjoyed this episode, please share it with someone who may enjoy it as well, and consider leaving a review on Apple Podcasts or Spotify. You can also submit your feedback directly on my website.
Follow @GrandSlamJourney on Instagram, Facebook, Twitter, and join the LinkedIn community.
Klara: Polly, thank you so much for accepting my podcast invitation and chatting all things AI. Welcome! I wanted to dive into AI, but also a little bit into your journey towards AI. I think it's really fascinating, given all that has been happening in the world lately. So, before we dive into many interesting topics, I wanted to give you a chance to introduce yourself to our listeners.
Polly: So my name is Polly Allen. I help professional business managers and leaders get into AI careers, or help them get trained to best support software teams building software that leverages AI. I have over 20 years of experience in high tech, as well as experience at Alexa AI; I led the team that launched the very first generative AI answers on Alexa in 2020. It's been a wild ride coming to a point where I'm helping others follow in my footsteps. It's a really exciting time to be involved in the space.
Klara: Yes. And could you describe a little bit of your journey and what made you interested in technology in the first place? I'm sure there are a lot of people listening who are curious to hear your career path, as AI is becoming such a big thing, and people may be wondering how to get closer to this new technology that may either replace their job or advance it. I'm sure there are a lot of questions going on now. So what was your upbringing like, and how did you get to the AI world?

Polly: I trace the line to my interest in technology right back to my upbringing. I grew up in a fairly remote place: the Yukon territory, up in Canada. It's funny, when you speak to people outside Canada, they're like, oh, another place in Canada, but if I tell Canadians I'm from the Yukon they're like, whoa, there are not very many of us. It's a really sparsely populated part of the country. It is very cold in the winter, up near Alaska, but it does have a little more funding per capita for students. So when I was growing up in elementary school there, we had more access to computers in our classroom than many folks would have had. We also had a principal of our school, a woman, who started a programming club when I was in elementary school, where we could learn to do really basic programming with LogoWriter. And I really loved it right away.
Polly: I really loved the idea that I could give it a set of instructions that it would carry out, and playing around trying to be creative with what I could make computers do. Right away I knew I was interested in it, so it was something I pursued through junior high and high school, computer camps, etc. Eventually I decided to pursue an undergrad degree in computer science. It was interesting: when I started my undergrad at the University of Victoria, I hadn't realized at all that computing at that time was still very much a predominantly male field, and I remember having these feelings of: I don't belong here, I'm not going to figure this out, everyone knows more than me. But luckily I had that background of having worked with it before; that really helped me feel a lot more confident than I might have felt otherwise. I kept getting good grades, so I stayed, even though I felt somewhat like an outsider, certainly at the beginning. But yeah, I knew I liked programming. I really enjoyed the problem solving of it all, even when sometimes you want to bang your head against the keyboard trying to figure something out. So I did my undergrad and my master's in computer science, and even back as far as my master's degree, I was looking at knowledge-based systems and how people were looking to represent knowledge the same way our brains do and to do reasoning.
Polly: So that was in the field of AI at the time, which looked very different than it does today. I hadn't been working in AI in depth since then, really, until more recently. I worked as a software engineer and eventually a software manager for 10 years at least. My heart had always really been, though, in how people interact with systems: why are we building things? Do people understand why we're building things? And so I knew a better fit was going to be in product management. I did my MBA at UBC, the University of British Columbia, and moved into the product management space, where we're deciding why we're building something, working with the engineers and trying to support them. The best description I've seen of product management is: well, you're really the janitor, basically. You see what can go wrong, clean up any messes, pave the path for people, and kind of lead the team from the side, a little bit from behind, into what they're building. So that's sort of what led me there. That passion from a young age really helped me stay the course.
Klara: Can you recall a specific person who influenced you? Or was that passion just something you discovered, as you described it, and sort of knew it fit? Was there anything more to it than that feeling?
Polly: I do remember our principal, Margot Simono, who decided to run kind of an after-school program specifically to give children time to play on these computers and try new things. It wasn't part of the curriculum at that point, but we did have more access, so we could have one computer per child who was interested. It just took someone interested in education, giving us all a chance to play around and see what we could do with it.

Klara: I do want to double click on your later experience. I think when we're younger, and I know you can validate this feeling, we don't notice the differences so much, and so we tend to follow our passion and curiosity. But in the later years, maybe late teens or early twenties, you start realizing: oh, there aren't that many women in the class, right? That's when you mentioned you had that realization. And I think that's still the case in technology; in telecom especially, it's very rare to see women, particularly in the leadership circles in the room. So I think it's partly a pipeline thing, and I also wonder a little bit about nature versus nurture. Many people may disagree now, but Jordan Peterson, a Canadian too, talks about some of the differences between men and women, and his theory, or his findings, is that men on average are more interested in technology, and women on average are more interested in people and human interaction, in speaking with people rather than just playing with technology. You seem to have a really interesting convergence of the two. What would you want to share from your experience, or what is your outlook on this area?
Polly: I personally don't agree with Jordan Peterson's theories, especially if you look back to the origins of computers and the first use of punch cards. At first, programming was considered an administrative task, like we're just writing instructions, so let's leave that to secretarial-type people. So in the 50s and 60s it was the domain of women; there are a lot of women who are really well known for having come up with the first protocols for a lot of things in that era. So I just think it's kind of silly. It happens to be how it is, but it's also how these systems have developed, and especially how they're taught caters in a lot of ways to analogies drawn from things that are coded as male in our society. We definitely internalize these messages of "this is not for me" if every analogy we use is about creating race cars and slot machines, like a lot of the examples I saw in my undergrad.
Polly: What I find interesting is that the places that have had more success in bringing more women into computer science as undergrads are combining it with something practical. So I don't know if I agree with that idea that playing with technology, because it's a bright and shiny thing, is a male thing. Before the mid-80s, the graduates from computer science were pretty much 50-50; the share of women was much higher. It dropped significantly in the 80s, when we started marketing computers to dads and sons, as a dad-and-son thing to do, as an alternative to your car kit or whatever. But through the 2000s, some of the schools that have done the most to get women enrolled are doing things like computers and music, or computers and psychology, where computing is combined with their passion as an enabler, and then it's very clear: oh, this is for me, it's part of my passion, let me learn the tools I need to be able to do it. So I think that's a really promising approach to encouraging diversity.
Klara: I think your journey is really interesting. It could be taken as a blueprint, right? If you give everyone access to it and an encouraging, supportive environment, it seems like the journey unfolds, as long as, I guess, women aren't discouraged in the later years as well.
Polly:Yeah, I agree.
Klara: So let's dive into what you do now, because I've been following you for at least six months and I'm really interested in registering for one of your classes now, as I move to Austin in the upcoming few months, once I get my life in order and settle in. I'm definitely going to sign up for one, because I am so curious about AI and all it has to offer, and even though I'm not a programmer, I would love to see what I can create with some of the language models. But maybe tell the audience a little bit more about your recent venture, coaching and mentoring and creating programs centered around AI, and why you decided to start it.
Polly: So I left Amazon in November 2020. As I was rising in the ranks there, getting promoted into leadership meetings and things like that, I had grown kind of frustrated with just how completely male dominated it had become. I had seen computer science improving over the years since I started programming back in the early 2000s, and then I was like, this is weird, are we back in 2003? What has happened? It definitely felt like it was becoming more and more impossible to ignore this kind of grinding feeling I was getting from being the only woman in the room over and over again, and having perspectives on things that were sometimes brushed aside or ignored. I felt like having that diversity of thought in the room is really important as well. So I knew there was a space to bring more folks into AI careers, and not necessarily by focusing on the pipeline problem. I think that's a problem as well, but what I see happening a lot is technical gatekeeping, even around business or leadership functions in AI organizations, where it's kind of assumed that to lead these teams you need to be the PhD, or at least an experienced technologist, a developer specifically: you need to know the code and have a computer science degree, some kind of technical credentials. I had done a bunch of technical courses to get into the AI space, and I was really surprised, as a product leader, by how little of that I actually needed. There was a core set of language, of course, to have a common language with your team, but I was just frustrated by how much of that gatekeeping around a computer science degree was happening. So AI Career Boost was designed to help folks who are experienced business leaders, product managers, and marketing managers in that space, who want to learn more about AI and be able to lead those kinds of initiatives. So we started with a couple of different courses.
So the very first course I launched was with another amazing leader, Rupa Chattervedi, a design leader I met at Amazon. She then worked at Google, and she's now a Senior Design Manager at Uber. She's also taught at the Stanford Design School, and she has this depth of experience specifically in conversational design for Alexa.
Polly: With this explosion of interest in conversational AI, we decided to put together a course to really help people understand how it gets built and what their options are, from the old-school way to what's been possible since last November with just using an LLM and seeing what you can do. What are the tradeoffs if you really want to use that system long term? When should you use it for a prototype, and how can you create mockups where you can actually iterate on the design considerations of these systems? A lot of folks have only begun to think about how user interface design for voice or even chat systems is drastically different from a system you drive with a point and click of your mouse. So decisions like how long should the answers be, how chatty should it be, should it try to keep the conversation going, what's its personality: these are the kinds of considerations we want to help people build in from the beginning. Our course is a week long, and we have another cohort starting soon.
Polly: I've also launched a more in-depth course that's not just about conversational AI. It brings concepts from the world of AI to the business sphere, and we do have, of course, a focus on generative AI, because that's so important these days. But it really helps business leaders walk through everything from requirements (how is an AI project different from a traditional software project?) through to working with the model development: what does that process look like, how can I help and support a team, what kind of information can I bring to them, what's my role if I'm in a product leadership position there, all the way through to considerations that help you look around corners.
Polly: When I get this to production, what else needs to happen? What should I be keeping an eye out for? So it's definitely what I've gleaned through my experience putting these AI products into production. As part of that, everyone in the class is developing a generative AI prototype of a system. I do think the future of product management is that product managers, instead of bringing requirements documents, will actually bring a prototype, because it's really easy now to prototype any kind of software simply by asking an LLM to behave like that piece of software and prompting it correctly. So we're really working on: hey, how far can we push this with no-code tools, and see what we can build, so they can create a vision, rally their team around it, and get a really clear specification of what the tool is supposed to do. It's been very exciting; that class is in progress right now.
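As a loose illustration of the technique Polly describes here (asking an LLM to behave like the software you want to build), a product manager might turn a lightweight spec into a role-play prompt and paste it into any chat LLM. This is a minimal sketch; the product, its rules, and the function name below are invented for the example and are not from Polly's course.

```python
# Hypothetical "prototype by prompting" sketch: describe the product's
# behavior, then render it as a role-play prompt for a chat LLM.
spec = {
    "product": "a recipe-suggestion assistant",
    "tone": "brief and friendly",
    "rules": [
        "ask at most one clarifying question before answering",
        "always list ingredients before the steps",
        "decline requests that are not about food",
    ],
}

def build_prototype_prompt(spec):
    """Turn a lightweight product spec into a role-play prompt for an LLM."""
    rules = "\n".join(f"- {rule}" for rule in spec["rules"])
    return (
        f"You are {spec['product']}. Stay in character.\n"
        f"Tone: {spec['tone']}.\n"
        f"Behavior rules:\n{rules}\n"
        "Begin by greeting the user and waiting for their request."
    )

print(build_prototype_prompt(spec))
```

Iterating on the spec dictionary and re-chatting with the resulting "software" is the design loop: answer length, chattiness, and personality all become lines in the spec rather than code.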
Klara: Yes, and I love the advancement, especially the no-code tools. Some of the things these new large language models allow, in my view, and with the next advancements may allow, is for people like me, who don't have a programming background, to build things.
Klara: Actually, it's sort of ironic: when I took the job at Apple, which I can't talk much about on this podcast, one of the things I got closest to was the amazing things you can create with an app. I know a number of different spoken languages, but I don't know any coding or programming languages, and it's really interesting all the things you can create, and how you can impact somebody's life, by creating an app or a program. So I hope this will allow more people, or people like me, to play with things and have a slightly different kind of access to technology, or even just to experiment and explore. Maybe before we dive a little deeper into these areas: I attended one of your webinars, which was fantastic, and you were so great at explaining even just the basics that I thought people may want to hear them, because there's a lot of lingo around AI, and some generalizations going on. Give us a little bit of an AI 101, if such a thing exists. What would you want people to know as they think about generative AI, or AI in general, and how and where to start?
Polly: Wow, it's a great question, a big question. I'd start by explaining the term AI. It's used a lot these days; it's become overused to the point that it doesn't mean much anymore. Technically, AI has been around for a very long time, since the 50s, so what we are talking about today is very different from what people were excited about back then.
Polly: Currently, what people are most excited about is kind of a new way of communicating with computers. Instead of telling computers exactly what to do, step by step, as a set of rules, we're able to give them a set of examples, kind of like when you're teaching toddlers colors: you point at something and say green or blue or yellow. You're providing what, in machine learning terms, is called a labeled set of data: here's what I'm talking about, and here's the label for it. By giving a person enough examples, they eventually come up with an explanation for what is green, what is blue, what is purple. Machine learning using neural networks is an extremely, extremely simplified form of that happening. It's not nearly as complex as what's going on in our brains, but we are able to use examples, and the machine kind of derives the pattern: hey, what is the pattern that explains these examples the best? Then, when you give it a new input, it can use those same patterns to determine the output. And that's the basis of today's large language models: each has kind of a map of how all the words it has seen in a large corpus of data fit together, what the patterns are, so that, based on those patterns, it can pick the next logical word given the words you've given it. There's been much more complex work to train them, using different techniques, to make them sound smarter than just a simple autocomplete predicting the next word. But at its core, that's really what's going on.
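To make the "learn next-word patterns from examples" idea Polly describes concrete, here is a toy sketch. It is nothing like a real LLM (which learns from billions of parameters with neural networks, not word counts); the tiny corpus and the function are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "corpus": the examples the model learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice; "mat" and "fish" only once each.
print(predict_next("the"))  # -> cat
```

A real LLM does the same thing in spirit (given the words so far, pick a likely next word) but over vastly richer patterns than simple adjacent-word counts.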
Polly: So I think people sometimes lose track in the hype around AI. It breaks down to very basic mathematical functions, and we are still at a stage where even these extremely large language models, with their billions of parameters, have yet to do really well at tasks like combining, say, image generation with text generation. If you go to DALL·E 2 and try to generate a Christmas card or something, the words will come out all funny. So I think of it as: in our brain you have a speech center and you have an image center, and they're interconnected, part of the bigger machine. We're still just recreating tiny bits of human capability. Of course, it's moving really quickly, and it's exciting to see things like multimodal AI models coming out that can do some combination of processing images as input and text as output. But we're still in very, very early days, and a long way, I think, from the sort of science fiction scenarios of AI taking over our lives and posing an existential threat.
Polly: There are obviously some very educated people who disagree with me on this, but I have found, for the most part, that the more time people spend actually using these models, the more they understand that they're still quite random in what they produce. You can get them to produce scary things, and you can get them to produce unscary things as well. I think it's human nature to revert to science fiction stories when we don't understand something, right? It's like, well, I do understand the Terminator and SkyNet, right? So we're looking for analogies.
Klara: Yes, the darkness of chaos, when something is unforeseen or changed, mostly spins the human mind into fear and worst-case scenarios, I think. But given what you described, which I think was a really great introduction, I wonder how much risk there is. I'm tying it a little bit into diversity, even in the basics of labeling, because if we look at the model, there might be quite a bit of judgment needed to apply even some of the basic logic. Earlier, with Google Assistant or Amazon Alexa, there were a bunch of theories about whether they are sexist or racist, and whether they understand different accents or languages. With something like my accent, sometimes I have to ask Trevor to give the device a command, because it is obviously not tuned on all the different kinds of accents. That's one part.
Klara: But there's also the risk, with diversity in mind, of who's labeling some of these words and how we perceive them, because we do know that human minds are very different, and so having diversity even early on in the product development and the language model can be really important to set the correct foundation for the next evolution. Would you agree? Or how do you look at it, and what would be your invitation in this early phase of development, to really set a good foundation and base?

Polly: Yeah, 100%. The amount of labeled data that's needed for these models is often underestimated. It's estimated that OpenAI spent something like 40,000 hours labeling data. Now they, and many of the other large tech players, do that by outsourcing it to cheaper labor. In the case of OpenAI, I know they received some negative attention in the press for hiring data labelers out of Africa for less than $2 a day. So it becomes: oh, this just takes basic human judgment, maybe reading ability; who are the cheapest humans we can get to do this?
Polly: The power, in this case, is really in deciding on the guidelines those annotators are given. That's really interesting, because it's definitely not the sexiest part of machine learning in any organization, but those guidelines are so incredibly important. That's where you'll end up with a lot of the biases, depending on how the labeling works: whether a model is considered too woke or not woke enough, or has issues with toxicity, and things like that.
Polly: We definitely had an extensive set of policies at Alexa, like a content policy, and it can cover anything from bias and toxicity to things like: will you allow statements that compare two products and look like shopping advice, when really that isn't the role of an assistant? And so that role is often performed by a conjunction of the data science team and product managers.
Polly: Again, product managers sort of serve as the hub to get everyone on the same page about, well, what is our policy on this? Where there's a gray line, like, is this a biased statement or is this an untrue statement? This is really where you get to be the arbiter of truth for a model. So I think it's interesting that for some of the biggest systems right now, there's no transparency into the guidelines that have been used, in many cases. That's going to be one of the biggest things that helps you really understand: what is the ethos of this model, what is the ethos of the people who directed its development, and how is the data going to be skewed one way or another?
Klara: Yeah. Building on that, and maybe on the fear you mentioned: you don't see it happening soon, right? It's going to be a long period of time before an AI conscious brain can outsmart us humans on a different level and the dark doom scenarios take over humanity, with AI potentially getting rid of humans. But there's been a lot of that circulating, even from top leaders in industry and technology such as Elon Musk, or now Sam Altman, who testified in front of Congress highlighting some of the fears and concerns. How do you look at it, Polly, since this is your field? I'm looking at it from the outside, but I'm curious about the expert opinion.
Polly: I am much more concerned about the clear and present dangers in the current technology. As much as there should be effort around guardrails, and especially the transparency we're going to demand of the companies creating these systems, the bigger worry for me right now, and I feel like we have enough worries to deal with, is the risk of mass production of misinformation. That is very large, and those are the kinds of dangers Sam Altman highlighted in some of his testimony as well. I think we have to focus on misinformation and fraud. We're already seeing a rise in things like people posting fake job descriptions online. You can easily make a job description look a lot like other job descriptions from the same company and post it, and then, once people apply, say: congratulations, you got the job. Please send us $300 so we can send you a laptop for your new job starting Monday.
Polly: That's one example of the kinds of fraud that are out there. They were always possible, but now a single person can do them at scale, really quickly. And even things like social engineering, getting people to share passwords: there's a really big concern that the current systems can write exploits, code exploits, to hack around security really quickly and easily. All of these seem like real, clear, and present dangers that we can identify, and that I think need urgent attention. So I'm not saying it's without risks, for sure. I find the whole science fiction of AI taking over the world a bit of a distraction from these imminent issues that we need to address right now.
Klara: Yes, I love the things you highlighted, the imminent ones. Funny, one of the things it made me think about: as I move, I'm trying to get rid of the stuff I don't need, so moving is always a great opportunity to clean up your life, and I post things on Facebook Marketplace. Just recently it happened twice within two weeks: somebody sent me a message that they're interested, and they wanted my email and Venmo account to make sure they're set up when they come pick up the item. But then they sent me an email about a Venmo money transfer, saying I need to deposit money in order to receive the money. It's just so weird. It's a total scam.
Klara: But my bigger point, and I wonder if this happened to you too, or to most people: even through COVID, the amount of spam calls increased so drastically. It seems like hacking and the importance of cybersecurity really took on a different level, at least from my observation, over the past two years. I've gone so far that I actually use a spam-filtering app; somebody who's not in my contacts just gets blocked, because I got tired of taking calls in whatever language I can't understand. And so this can be a really big thing. To build on one point: the older generation, specifically my mom, she's so trusting. Somebody calls her from Russia, speaking to her in Russian, and she only knows a quarter of the Russian she once knew, and she's like, oh sure, I'll give you my bank account information.
Klara: Mom, no, this is scam 101, don't do this! But there's so much of this going on, and it's accelerating: somebody like me almost fell for this Venmo scam because it looks like a real message. If I don't look carefully, it looks so authentic. So how do we help? Or is it really just cybersecurity and stricter security rules and conditions we have to look into?
Polly: I think there's sort of a top-down and a bottom-up approach. Bottom-up is: how do we educate people like your mom? It's almost like that whole area of having to educate about news: where is your news coming from, and getting savvy about digital information. I feel like that needs to get turned up to 11 to keep people from getting taken in by things like this, and we all just need to be a lot more suspicious.
Polly: One of the things that has just become cheap is information. We were kind of in an information economy before, but knowledge, and trust in information and in the people who can guide us through it, has become really valuable. So we are going to trust the brands we know and love not to feed us false news narratives and not to make up photos using deepfakes, and part of their brand value is going to be proving that they're not doing these things. This article was written by a human; this photo was really taken. That's how they'll distinguish themselves from the noise.
Polly: I think that's going to emerge, but that bottom-up education of people to become more suspicious of the information they see and consume is going to be part of it. And then there's top-down: investment in companies that are looking to counter these security threats. That's where this frantic feeling of a race, I think, is coming from: how do we keep the good guys ahead of the bad guys as quickly as possible? I really like what Sam Altman has said in his discussion of policy and regulation: that ultimately, in the best-case scenario, we have regulation that helps the good guys get ahead and keeps the bad guys from getting ahead. I don't know that anyone has a clear line of sight on what that looks like, but it's a nice way to think about the goal of that kind of regulation, right? It's got to be something that strikes the right balance.
Klara:Yes, and it makes me wonder, though, about the capability of the regulators. Who's creating this and how? Because if you've heard any of the congressional hearings, our politicians have no idea about the simplest things in technology, and so the concern that somebody like that is now trying to craft regulation around this is almost more scary than trying to incentivize the positive things. And then there's the other side of it. I'll compare it to sports. In sports it's somewhat clear, although it's becoming less clear; there are gray areas sometimes with doping. The point is, the people who want to dope will always find a way around the rules and figure out how to compete while taking illegal substances. So take that into AI.
Klara:If somebody really wants to use this for harm, how can we really stop them? I guess the hope is what you mentioned: that the positive uses of AI, and the new advancements in the technology, will actually outpace the bad examples, and that we can use those advancements in the right way to stop the spamming, the identity theft, the fraud, et cetera. Any other tips you would want to highlight, Polly?
Polly:I mean, I think that's the major piece. I really like that analogy with sports doping, though. How far do we hold back innocent people and make their lives miserable in order to catch 100% of the people doing evil things? The hard thing is that one person can now have a lot of impact with these tools, right? That's really what has changed. Bad actors were always going to do nefarious things, but now they can do them more quickly and at a much broader scale. And then, just thinking about the web: what if that information ecosystem in general becomes something you can never trust, because half of it is made up? That really is, I think, a clearer threat that I definitely worry about: that we no longer trust the information systems around us. That kind of societal risk is pretty large.
Klara:Yes, that's definitely huge. And we've somewhat seen that even with social networks during the COVID pandemic: which messages get filtered out and which get left up. Some would argue that doctors couldn't really speak up about some benefits or treatments, because posts were being filtered on certain words. So I guess the question is: who is to decide what is right or wrong? That's also the difficult part of this.
Polly:And how easy it is to plant dissent. It scared me to see Sam Altman recently, in his interview with Lex Fridman, being asked this: so what is true, Sam Altman? And he said, oh well, there are some things we know are pretty high on the truthiness scale, like math. But what was it? He said something like, the origin of COVID is debated; that's not as true as math.
Polly:But if whether there is dissent out there about something becomes the new measure of whether it is true, then anyone with a vested interest can go in and raise enough of a counterargument against any true fact, and all of a sudden that fact is in dissent. So I guess it comes down to this: people are going to be choosing who they trust through all of this, and who they trust to be the arbiter of truth for them. It'll be interesting to see whether one or two big choices of foundation models emerge, or whether everyone will have their own and we'll have to discern for ourselves which ones we think are right for us. Those are very different worlds.
Klara:Yeah. And one thing I've also learned is that these language models are really widespread now. The problem with regulating is that anybody can, in some ways, start using them. So how do you regulate something when the cat is out of the bag and it's taken on a life of its own? How do you start containing it when it's readily accessible? But maybe let's switch to the positives, because I think we've covered a lot of the darker topics, and there's still so much optimism and curiosity in this area.
Klara:What do you see, Polly, even from your courses? What are some of the most amazing prototypes you've seen people create that take this technology to the next level?
Polly:Totally. I'm super excited; I see new things coming out all the time. One of the exciting applications of multimodal models I saw recently, which I thought was really cool, is a project called Be My Eyes. It's an app for people with low vision, and traditionally it has used crowdsourcing: users can carry around their phone and ask, hey, what button do I press on this vending machine if I want a Coke? Or, what's down these stairs, is it safe to go down here? And they get an answer, usually by typing or by voice, to help them. And now we don't need to rely on an army of crowdsourcers volunteering their time; we can use a model that can actually be your eyes for you. I thought that was just such a cool application that really helps people solve a big problem right away.
Polly:And then, in terms of the possibilities, I do think it won't all necessarily be the large language models. The other advances in AI that don't get as much visibility these days are doing things like helping automate early diagnosis of cancer, identifying root causes of disease, and identifying new treatments and pharmaceuticals.
Polly:There are really crazy advances every day that aren't even making the headlines, because the headlines are so dominated by other things going on.
Polly:And then there's the whole range of what we can do with large language models and what people are finding they can do with them. It's interesting, because I feel we're still at the stage where it's very easy to create a cool demo, but actually making something super robust, especially if you wanted to replace a traditionally developed software system with an LLM, I think the jury's still out on that. We're still really early on those systems. But people are starting to use open-source projects like LangChain, where you can chain several calls to an LLM, control what you do with the output, and pass it to another tool, and seeing what people are building with that is really exciting. Especially because some no-code tools are being developed in tandem, so people no longer have to learn the nitty-gritty of coding to get really powerful systems up and running. That part's exciting, but it's still very much an emerging field.
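The chaining pattern Polly describes can be sketched in a few lines of plain Python. This is an illustrative sketch of the idea that frameworks like LangChain orchestrate, not LangChain's actual API; `fake_llm` is a made-up stand-in for a real hosted-model call.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (purely illustrative)."""
    return f"[model answer to: {prompt}]"

def summarize(text: str) -> str:
    # Step 1: ask the model to condense the input.
    return fake_llm(f"Summarize in one sentence: {text}")

def translate(text: str, language: str) -> str:
    # Step 2: feed the first step's output into a second model call.
    return fake_llm(f"Translate into {language}: {text}")

def chain(text: str, language: str) -> str:
    # The "chain": the output of one call becomes the input to the next,
    # which is the core of what LLM-orchestration tools automate.
    return translate(summarize(text), language)

print(chain("A long article about multimodal models...", "French"))
```

Swapping `fake_llm` for a real API call turns this control flow into a working two-step pipeline; the structure is what matters here.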
Klara:Where would you suggest people start? Going back to the fear that this will replace us, that we'll be doing the lower-paying jobs and AI will do the smart jobs for us: I agree with you that we're probably still far from that, but there are some interesting things happening. My view is that we're becoming more and more connected to and in line with technology, in a way that it's just becoming part of our everyday job and day, whether we want it or not. And I do think that AI and these advancements will create a new set of opportunities, and potentially even jobs, that we can't envision yet. But I think we should really get connected to it, learn more about it, and keep up with the pace of the innovation. At least, that's my view and opinion. How do you suggest people start if they're on this journey? What is the first basic step you recommend?
Polly:The first step is really just getting familiar with the tool, trying to use it in everyday life. There are a thousand and one "buy my prompt pack" or "take my course on prompt engineering, become a prompt master" offers. I've always been skeptical of those, because the whole point is that it's supposed to be natural language. There are, of course, tips, like giving it the context of what you're doing and why, who you want it to act as, and who it's doing this for; that can definitely help make the answers less generic and more useful. And even giving it examples, like "hey, if I say this, I want you to say that," can really help.
Polly:That's one of the really cool features of these large language models: with only a very few examples, they can learn a new task. That part, I think, has driven the most fear, but also the most awe at the power of what's possible. So no, I don't think you need to buy a prompt engineering course, but I do think it's about trying out ChatGPT. Start with the free version and just see if there's something in your day to day where you can ask it, hey, can you do this? Give it a try. That's certainly how I learned the most with that tool in the beginning: just getting a feel for it. You quickly get an idea of where it falls down and where it does a good job.
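The "give it examples" idea Polly mentions, often called few-shot prompting, is just text that shows the model input/output pairs before the real question. Here is a minimal sketch; the task and example pairs are invented for illustration.

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task description, example pairs, then the query."""
    lines = [task, ""]
    for given, wanted in examples:
        # Each example pair shows the model the pattern to imitate.
        lines.append(f"Input: {given}")
        lines.append(f"Output: {wanted}")
        lines.append("")
    # End with the real query and an open "Output:" for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each message as positive or negative.",
    [("I love this podcast!", "positive"),
     ("This episode was boring.", "negative")],
    "What a great conversation.",
)
print(prompt)
```

Sent to any chat model, a prompt shaped like this usually steers the answer toward the demonstrated format, which is the effect Polly describes.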
Klara:Yeah, I'll put you on the spot: ChatGPT or Bard?
Polly:You know what, I have briefly tried Bard. I didn't really enjoy it as much, and I think this is interesting. Roopa and I have done some deep dives, just privately on our own, about what the difference really is, and it comes down to more of a UX difference.
Polly:I think in terms of the slow typing out of the words in front of you, it makes you feel like you're controlling a very powerful HAL 9000 kind of computer. I tried out Bard, but I have to admit I haven't tried the latest model; I need to get on that. But yeah, I've just gotten used to ChatGPT at this point.
Polly:We've been using it a lot for our classes and things too. When we're building, I do encourage students to use tools where you can compare different models' output, because depending on your use case, different models may be a better fit for what you're trying to do. So again, do experiment with all of them if you have time. If you only have time for one, I think ChatGPT is still in the lead in terms of capabilities, particularly if you get access to ChatGPT Plus. That costs $20 a month, but then you can access GPT-4, which is their latest model, and it has a lot fewer of the problems we see with these models, like making up facts and making up references. It still does this, and it still has problems with logical reasoning, but it does it less often.
Klara:Yeah, I do have to admit, the new version of Bard is quite interesting in that it's connected to the full internet, which is the power of Google, whereas ChatGPT runs on an older set of data. Maybe for your class the model creates that difference, but I've recently been playing with the new version of Bard, and it's amazing how it can pretty much index everything from the open web. That's a really interesting change they've made.
Polly:Totally. What I have used is Microsoft's Edge browser, which lets you do basically that: use the ChatGPT models on the most recent web data in a similar way. And recently, if you pay for ChatGPT Plus, they've introduced a plugin in ChatGPT where you can now say "browse the web," and if you're asking about anything more recent, it will do the web browsing piece as well, powered by Bing, so you can access that right within ChatGPT. But I'll have to check out Bard again.
Klara:I checked that in Bard, but I didn't try it in ChatGPT, so it's always fun to compare the two and the answers they give. I'm looking at some of my questions here, and I think we've gone through almost everything, Polly. But actually, this one is a ChatGPT-generated question: how can we ensure that generative AI is used responsibly and ethically, and what steps can be taken to mitigate potential risks or negative consequences?
Polly:That's a really great one. My encouragement to my students, and we spend a lot of time talking about this, is that risk management and mitigation is a much bigger part of a business leader's or PM's job on AI projects. They're just inherently more risky. The answers aren't deterministic, right? The system isn't behaving by a set of rules we can necessarily understand. So I always encourage folks to start very, very early, with kind of a 360-degree review of everyone the system will impact. This was inspired by a book written back in 2017 called Weapons of Math Destruction, by Cathy O'Neil. It's the best name for a book on risk in AI. I mean, she won. It's a great name.
Polly:We're done naming books on risk in AI Weapons of Math Destruction and she was a data scientist who had worked on several projects where unintended consequences that weren't foreseen, like bias against particular groups, really had large-scale negative impact. She had advocated early on doing this review of who is touching or impacted in any way by the system. This can start, of course, with like of our users. Are there any particular subgroups that we want to actually make sure we are not negatively impacting? but even people like the annotators who are labeling the data, let's make sure that they're not negatively impacted, as some have been, with real mental health issues arising out of hours and hours of having to label extremely toxic data or sexually explicit data. Even things like whose data are we using as this? Are they negatively impacted? This could have avoided lawsuits for Dali2 and OpenAI. If they've had a register of there is a risk that we're going to negatively impact the very people whose data we're using to train the model.
Polly:That 360-degree review early on can be a lightweight exercise. It can take just a couple hours of your team's brainstorming; it doesn't have to be a big, heavyweight "let's follow this 42-page process from the UN on being responsible." I want people to know that being responsible and ethical in development isn't the opposite of being agile. It's about identifying the potential risks of who you might impact, and then having a consistent process to manage those risks as they come up over the course of the project. That is really one of the key roles I think needs to be more solidified on these projects: someone who can handle all the emerging risks that people raise and become fearful of, and then take a real pragmatic approach to assessing them. How realistic is this? How bad is it if it happens? Let's prioritize these, make sure we have point people for the highest-priority risks, and come up with mitigation plans. That way you can be ready for some of these things.
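The lightweight process Polly outlines, list the risks, score them, rank them, and assign point people, can be sketched as a tiny risk register. The specific risks, scores, and owner names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    owner: str       # the point person for mitigation

    @property
    def priority(self) -> int:
        # A simple likelihood-times-impact score; teams often use a matrix like this.
        return self.likelihood * self.impact

# Example register from a hypothetical generative AI project.
register = [
    Risk("Model produces offensive answers", 4, 5, "on-call PM"),
    Risk("Training data includes copyrighted images", 3, 4, "legal lead"),
    Risk("Annotators exposed to toxic content", 2, 4, "ops lead"),
]

# Highest-priority risks first, so mitigation planning starts where it matters most.
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"{risk.priority:>2}  {risk.description}  -> {risk.owner}")
```

A couple of hours of brainstorming to fill in a table like this is all the "360-degree review" requires to start; the value is in revisiting and re-ranking it as the project evolves.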
Polly:There are always going to be unexpected things that come up in these projects, but having the systems in place really, really helps. On Alexa, we had a bunch of take-down systems. The minute we heard of any offensive answer, or answers that didn't meet our guidelines, someone would get a support ticket right away to instantly go in, take that answer out of Alexa, and make sure she didn't continue responding that way. We had teams monitoring social media and teams monitoring our own customer service calls, just to be sure. We knew we weren't always going to be 100 percent sure of exactly what Alexa was going to say, and that there would be times when what we put out there wasn't 100 percent within the guidelines. Having those processes in place was the responsible thing to do.
Klara:Yeah, that's a great idea. And as you mentioned, I do remember you talking about it in the webinar I joined, so I'm assuming that's part of your teaching and program curriculum, Polly?
Polly:Yeah, exactly, for sure. I think it's one of the areas that's going to be growing quickly as a whole: this idea of keeping up with risk and governance within AI, along with the regulations in the space, which are ever-evolving. I think all of that is a big area for growth.
Klara:This one is from Bard, although you answered it a little bit earlier. But maybe we can look at the macro view now, instead of just your program. What do you see as the most exciting developments in AI right now, and why?
Polly:Hmm, I am most excited to really dive into what's possible with these multimodal models. Again, the demo OpenAI did of taking a photo of a web app mock-up on a napkin and actually creating a running website was really, really cool. My predilection is to be excited about the building of software systems, because that's my background, but I think it's really cool that this may be a chance for people who would traditionally never even think they could build a program to actually be able to, right? I feel like that's another path to those diversity barriers coming down and more people getting involved in creating really powerful systems. I think it can be a really good thing.
Klara:Yes, and I'm actually really curious about that, and I know you have a program on just that. I'm curious whether I'm going to succeed or fail, or maybe a little bit of both, because I have zero programming experience. At Apple I've only taken the Swift course that's made for kids; that's the proficiency I have, just enough to understand a little bit of the basics around code and coding in general. But maybe tell us a little bit more about your upcoming program, because I'm personally interested and want to time it right with my own plan for the next two to three months. What's coming up for you and the team? And what should listeners look out for when it comes to the AI education and programs you create and run?
Polly:For sure. Yes, to start off, I have a free guide to your AI learning path on my website, AICareerBoost.com slash learn AI. It's really designed to help you understand: hey, given where I'm at in my learning journey and my technical proficiency, where do I start, right? AI really can be understandable for anyone, but there's so much information out there right now, and so much of it is aimed at a very technical user, that it can be hard to find the resources that are really made for people who don't program. So I've got a curated list of some of the best short courses and videos out there that I recommend for people who are just looking to understand the basics of how these systems work in the first place, and how to get more familiar with them. In that same guide, I've also got courses listed for people who have a little bit of programming, maybe a short course on Python, enough to be dangerous, who want to get more hands-on with understanding deep learning and machine learning in more depth, and even get a little more hands-on with large language models. That's what that guide is intended for. And then, as I mentioned, Roopa and I have a cohort of our conversational AI masterclass course coming up on the Maven platform at the end of June. We're taking a break for the summer and starting it up again in September. So if you're interested in that week-long course, kind of an overview of how conversational AI systems are designed and built and the trade-offs you make there, please do get in on it. And then I will be relaunching my flagship program, the one I'm running right now, again in August or September.
Polly:So it's called the Complete AI Product Leader Blueprint And the idea is to take you from sort of scoop to nuts of what is it like to lead and manage an AI project, get you the skills to be ready for that, along with being able to even see around some foreigners and understand some pitfalls to avoid. And in order to practice those skills, we're actually building a generative AI prototype that we'll then use to actually evaluate, like how well does this perform? Would I really put this in production? So what would it take to put this into production with a software engineering team and a data science team at my side.
Polly:So that will be relaunching at the end of August, early September coming up And I'm really excited to incorporate some of the feedback we've gotten. And by then, of course, there may be all new tools to start building those generative AI prototypes. There's some great tools out there already and even more emerging week to week. It's been hard to keep up, but it feels like the new capabilities for people exactly in your situation, Clara, who have never coded before, to actually be able to build something that mostly works. It's really cool to see these things come to life.
Klara:Yeah, thank you. I'm really excited. This podcast is voice only, but as you're talking, I'm smiling, and my inner self is going, oh my God, it's so exciting. I would love to try to play with something and see what I can create, or maybe fail completely, but just having that experience and getting my hands dirty with AI is really exciting for me. And if I can do it, it shows many folks that we can all make a prototype.
Polly:I can't promise it'll be a very good prototype, but that's fine. It doesn't matter.
Klara:I feel like you always find out something about yourself, whether it works or doesn't. You test your limits and you get as far as you can, or you fail, and there are also learnings in failures; sometimes those are the most valuable ones. So I'll add some of those resources to the episode notes, Polly. But for anybody who's interested in learning more, following you, or perhaps getting some tips from you, what's the best way to reach you?
Polly:The best way is probably to follow me on LinkedIn; just look up Polly Allen on LinkedIn. I do have a newsletter at AICareerBoost.com slash interested, and I try to get that out every four to six weeks, but I'm most active right now on LinkedIn.
Klara:So I'll add those links as well, and then maybe just last one to close with. We talked a lot about AI, the positives and negatives, and it's really something that seems to be evolving every day. I'm actually curious even how you could keep up with all the developments yourself or the models Given sort of the reality now we live in. What would you want to invite people to be doing more of or less of when it comes to AI and the technology and advancements?
Polly:Totally. Keeping up with the advancements is very hard. I do have a Google Alert subscription for the words "machine learning" and "AI," but I limit my time; it could very much take over your life, that's for sure. Same with Reddit: there are a lot of folks doing a lot of discussion and experimentation in the space, and watching some of those discussions happen is exciting, but it can also swallow up your time if you're not careful. And in my free AI guide I've included some of the top podcasts I listen to, the ones I use to keep abreast of the technology. Not so much the early-early "people are just trying things and have a cool demo" ones, but what is actually happening and working, especially as it applies to the enterprise. I think that's where we're seeing some new things just emerging now, because it's all so new.
Klara:Thank you so much, Polly. It's been amazing having you on and talking all things AI. Anything else before we close?
Polly:Just a closing message: AI really is, and should be, for everyone. It needs everyone to really fulfill its potential. It has so much power. So if you have even a slight inkling and are interested in it, I do encourage you to get experimental, get your hands dirty, and dive in.
Klara:I agree, and I'm curious to get my hands dirty with AI in one of the programs you have coming up. So expect me as an annoying student, Polly.
Polly:No, as a star student. Be careful, Klara, no pressure. Excellent. Well, I look forward to it. Thank you so much. Thank you.