Ever wondered how our reliance on artificial intelligence (AI) is shaping our future or if it’s threatening our job security?
In this episode, Hannah Clark is joined by Ken Hubbell—CEO of Soffos.ai and Senior Consultant at The Pragmatic Futurist—to explore the ethical side of AI, inherent biases, and the need for curated information on AI.
Tune in for a conversation on the future of technology and human potential in this AI age.
Interview Highlights
- The Potential and Benefits of AI [0:49]
- Ken is a firm believer in the potential of generative AI in the workforce. The idea that AI might replace humans in job sectors is not new, but it’s also not entirely accurate.
- Ken emphasized that AI operates based on its programming, suggesting that our concern should be focused more on how we use AI, rather than the technology itself.
- The goal should be to harness AI for positive collaboration between humans and technology.
We should be more afraid of ourselves than we are of the AI. Because at the end of the day, the AI doesn’t do anything that it’s not told to do.
Ken Hubbell
- Managing AI and Human Teams [9:08]
- One of the issues discussed is the struggle younger generations face in managing people due to their lack of experience and heavy reliance on technology.
- It’s crucial to teach respect and mindfulness when using technology, given that AI can reflect our biases. The way we interact with AI can influence the output, underscoring the need for respectful communication.
If you’re being rude or demanding to the AI because of the way it works, it’s going to give you something back that was on that spectrum of a response.
Ken Hubbell
- The Impact of AI on Communication [12:16]
- AI has also profoundly impacted our communication and creativity, particularly during the pandemic.
- The use of AI tools, like chatbots, has become more prevalent, reshaping our remote communication habits.
- While some fear that AI might take over creative fields, the episode argues that AI is merely another tool for creativity, which has actually increased the demand for creative work.
- The Role of AI in Writing [19:18]
- One intriguing aspect is the use of AI in the writing process. Ken shares his experience of using AI to aid in the creation of a book, highlighting the speed and efficiency of AI.
- However, he also addresses the limitations of AI in terms of creativity and stresses the importance of human input in the final product.
- AI and Education [24:46]
- AI also poses ethical considerations, particularly regarding inherent biases. These biases, which are a reflection of human nature, can be filtered and improved upon.
- Education plays a crucial role in addressing these biases. The conversation then shifts to the rapidly developing field of AI and the lack of a definitive source of information on its developments.
- Looking Ahead [30:47]
- As we navigate the future of AI, it’s important to understand that it is a tool meant to aid us, not replace us. AI is a reflection of our biases, creativity, and innovation. Understanding this can help us harness the potential of AI for positive human and tech collaboration, shaping a future where AI is used to enhance our capabilities rather than threaten our job security.
Meet Our Guest
With over 20 years of experience in the EdTech industry, Ken Hubbell is a pragmatic futurist who designs for tomorrow and builds for today. He led the product management team at Soffos.ai, a company that provides low-code AI/ML solutions for developing advanced natural language processing applications that include knowledge management, learning and assessment tools, question-answering, and performance support systems.
If you’re really creative, AI is not a threat to you, it’s just a new medium.
Ken Hubbell
Resources from this episode:
- Subscribe to The Product Manager newsletter
- Connect with Ken Hubbell on LinkedIn
- Check out Soffos.ai and The Pragmatic Futurist
- Read Ken’s comments in our article on what Google DeepMind’s Gemini will mean for products in 2024.
- Check out Ken’s book – There is AI in Team: The Future of Human, Augmented Human, and Non-Human Collaboration
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Hannah Clark: A good way to know that something is truly disruptive is when people are terrified of it, and Generative AI is definitely taking us there. But honestly, the fear around AI as a threat to job security, among other things, is not unreasonable. But history tells us that how threatening an innovation seems at the outset isn't always an accurate predictor of how things will shake out.
My guest today is Ken Hubbell. He's the CEO of Soffos.ai, Senior Consultant at The Pragmatic Futurist, and author of There is AI in Team: The Future of Human, Augmented Human, and Non-Human Collaboration. So, you might say he knows a few things about AI. Having had a front row seat to the development of AI products over the last few decades, Ken is not convinced that LLMs are the boogeyman some folks are making them out to be. In fact, he's pretty confident that this is going to level the playing field better than ever before. Let's jump in.
Ken, I'm super excited for this conversation. Thanks so much for joining us.
Ken Hubbell: Hannah, thanks for having me. I look forward to it.
Hannah Clark: Yeah. So we'll start how we always start. I'd love if you could tell us a little bit about your background and how you ended up where you are today.
Ken Hubbell: Okay. So I was lucky when I was younger. I had four amazing teachers in the public school system - Mrs. Strawbridge, Mr. McCoy, Mrs. Fisher, and Ms. Lambert pretty much shaped where I am today. They each saw something different in me and allowed me to build on those skills, that creativity and that innovation and things like that.
Back before those were really cool things to have. Mrs. Strawbridge, she let me write and she let me create stories and tell tales and things like that and didn't hold me back. Mr. McCoy is an incredible person. He taught something called the Enhanced Learning Program down in Florida. It was like the Gifted and Talented Program, but it was a class where you got to do things that were outside of the normal curriculum.
Including things like learning how to program a computer back when they were brand new and didn't exist in a whole lot of places. And then shooting movies and being able to do archaeology digs and things like that. It was a fascinating time period, but it allowed me to know that you could go outside of a box, you weren't constrained.
Mrs. Fisher let me read anything I wanted to, stuff that was adult level reading at the time and it allowed me to explore, biographies and fiction and science fiction and it was just amazing. And Ms. Lambert, she was our drama coach. She allowed me to learn how to do this, to be able to speak in public and get up on stage and not be afraid.
So I took that and then I went to computer camp. I learned how to program and do my first simulations. Later on, after I graduated from high school, I got a chance to go to the military academy here in the States, and I made it through plebe year, learned how to program computers and the Machiavellian approach to learning. And then I decided I wanted to go into design school, left the academy, and went to the North Carolina State University School of Design.
I then went out into the business world and created sets and things like that for television programs down in Florida. Came back up to the North Carolina area, got involved in doing computer animation and simulation for training programs, and then realized I really like that aspect. I like to teach people how to do things, but yet still keep my hands in the code.
So I eventually went and got my graduate degree in instructional technology, and that leap from product design to instructional design was really not that huge. It's design, and you realize that the products are just different. What was really fascinating is in 2010, probably 2012, I had the opportunity to get a hold of an early model of the Alexa.
And I realized, this is the future of learning. I was creating things like performance support tools that were hands free, and you could ask questions about whatever it is you want to do and it would step you through a process. So I was writing skills for Alexa and realized, okay, I like AI. I got into the circuit of doing that, and at the same time I was recognizing there was the potential for a massive disruption in the talent industry and in the way human beings worked. And so I started speaking at conferences and doing presentations on talent disruption, the impact of robotics and augmentation and AI.
And I met a guy named Nik Kairinos, and he and I founded a company called Soffos, which is, the company we're talking about here. And we started building an educational product, thought that would be the way to go. And then we realized that getting into the educational space with products is pretty dang difficult.
There's lots of things you have to go through. And people were really interested more in the guts of what we were doing, and so we started down this path of creating a set of Lego blocks for Generative AI. And we released a product, which is our platform as a service, and then ChatGPT hit. It was actually really timely, but we realized, okay, we need to upgrade the platform, and so we spent the last nine months or so rebuilding the platform.
It is now officially live. So that's really cool. That's version 2.0. So now we're looking at expanding it and adding other things to it, but this is the place to be. And I'm excited to be here.
Hannah Clark: So obviously you've really been in the weeds with AI since the earliest of days, as far as the development of where we're at right now. But of course, there's a bit of a flip side to that. Like right now, we're seeing a lot of panic around what AI's potential is, whether there's a threat to human labor. So what's your take on that? Do you think that people should be as afraid of AI as they are?
Ken Hubbell: So this is my standard answer to this. We should be more afraid of ourselves than we are of the AI. Because at the end of the day, the AI doesn't do anything that it's not told to do. So we have the opportunity here to either create something really great and partner with something, and I call it a partnership. The book I wrote covers this whole aspect of, 'It's not us versus them, it's not them replacing us. It's us all working together as the new team of the future and it starts now.'
The future started, a while back, but it really is kicking in now and if we treat it like that, then we can all succeed. And it's not about it replacing us, it's about us working with it to make us better and in many cases, faster, cheaper. So when you think about a small business, for example, small businesses are in a conundrum because oftentimes it's difficult to scale. Because you're talking about hiring somebody and paying them a salary and trying to figure out how that fits within your budget and you're small anyway.
And one of the things that happens there is that they choose oftentimes just not to do it, or they work 24-hour days trying to compensate for the fact that they can't hire those people because they're short on money. Well, now, any business can have their own personal secretary, any business can have their own ideation department, any business can have their own contract writing and proposal writing and grant writing group.
All these things cost incredible amounts of money and time and resources that are now being able to be done in seconds and minutes by people who wouldn't be purchasing them, they wouldn't be hiring somebody anyway so it's not replacing anybody, it's actually enabling small businesses to be competitive at a level they've never been able to be competitive before. And that to me is the leveling that's happening here. I think there are a lot of big corporations that are a little worried right now because small businesses are going to be able to nip at their heels a lot better, might even take out their whole leg.
Because they're able to compete now at a level that they couldn't before. And so this position that these larger companies have had, to have teams of people doing stuff, I can now replicate using our platform. I can literally spin up dozens of agents doing work that I need to get done that I would otherwise be limited to myself or just a very small staff trying to get done at the same time.
Well, that's pretty freaking amazing. So that's where I think there's a lot of power in this. I think there's a lot of wonderful things that come out of this. But I think that on the flip side of that, if we don't teach kids and people across the spectrum, but kids, especially at an early age, what this tool is, how this relationship is, what this partnership looks like, then we put them in a position where they may not be learning the requisite skills that they need to, that they look at as an easy way out.
Or something along those lines, but they're also not equipped then to compete at the same level as someone whose kids are being cultivated that way. We are early in this game and there's a lot of things to be ironed out. But I think that if we look at this from a positive standpoint, some people say I'm being a little Pollyannaish about it. But if we look at it from a positive standpoint and we put the right safeguards in place, we will all benefit in this.
Hannah Clark: I couldn't agree more. I think that the matter of safeguards is something that we haven't really explored quite enough as a culture. But I also want to talk a little bit more about the aspect of having AI almost as a tool to do your bidding and the requirement for human oversight.
And I think that's something that we don't really talk about as much when we talk about human versus AI, that kind of balance of how much AI, whether it can actually replace a human being. And I think this is something we've chatted about in the past: if you're managing a team of human beings, or a team of AI or LLMs, you still need to have oversight over that output because you're still responsible for it. Right?
Ken Hubbell: Absolutely. Again, to your point, that goes whether you're managing a team of people, whether it's a local team or a global team, or whether it's AI or whether it's both. There was a Forbes article that just came out, and it echoed some of the things I've talked about in the past, which is that people in the older generations that were used to managing large teams of people are actually adapting to this much easier than Gen Z and Gen Alpha.
Gen Z, while they love to play the games and they love to coach each other and spin up teams to do specific tasks, they're not good at managing people yet. I mean, I think there's still room to cultivate this, but because of some things that have happened between COVID and other things, they've been put in a position where they have not had to manage anybody but themselves.
And so it puts them at a little bit of a disadvantage, not to mention the fact that they've grown up with this technology so they take it for granted. And it's like, oh yeah, I can do this, but why would I? Or I don't see the value add to it yet. Well, part of this is because they didn't have to go through some of the stuff that the rest of us went through.
And the other thing is that they weren't used to it. I mean, there was a day when people had personal secretaries. Now, when those went away, people learned to do things themselves, but now we have an opportunity to get personal secretaries back. Those who remember those days are actually like, yay, good. I don't have to do this myself.
I can just throw out a couple of things and it'll generate the message for me. I still have to read over it, just like I would read over what my secretary wrote too. Because if this is going to the president of the company or to a client, I sure as heck don't want my name on it if it's got something in there I don't agree with. That's just common sense, and it brings up part of what's happening with these AIs, especially the chat stuff: it's holding up a mirror to ourselves and it's saying, what is it you said?
What is it you're asking for? Because if you're not describing it in the right way, if you're either consciously or unconsciously putting bias into what you're asking for, you're going to get bias back. If you're being ugly to the AI, if you're being rude to the AI, if you're being demanding to the AI because of the way it works, it's going to give you something back that was on that spectrum of a response.
So here's a neat trick. Try this sometime. You can go to Soffos chat and use it. I recommend it, it's free. But go out there, ask it something nicely, and then ask it something mean, but ultimately the same question. You will get two different answers. And so if we teach kids early on, if we teach people early on when they are using this technology to be nice to the machine, you're going to get really great answers. If you teach them to be ugly to the machine, you might not get the answer that you want.
Hannah Clark: I think there's something really to the idea of the timeliness of all of this, like ChatGPT especially taking off in public shortly after the pandemic, when a lot of us, especially in the tech space, have been so trained now to communicate remotely when we used to have a lot more contextual cues.
So we actually had an episode previously with Nimrod Priell, he's the CEO of Cord, and he was talking about how there's just some features and products that were just at the right time, even if they were not a new idea, but because of the timeliness of the release, people were just, they already had that prerequisite habit.
So this is an interesting time for AI, because when we want to use those tools effectively, we're so much more reliant on the ability to communicate remotely and clearly through text mediums.
Ken Hubbell: Yeah. What's fascinating, my son, he's in his last semester of studying to be an anthropologist, and he's been using our tool set from a research standpoint. Not to do the research, but as a subject of the research that he's doing.
And one of the things he's discovered, he said, Dad, I've learned to communicate with people better as a result of communicating with Soffos. And I was like, well, what do you mean? He said, when you ask Soffos something, if you're not clear, if you're ambiguous as far as what it is you're asking, it either gives you an answer that you didn't expect, or it asks you to clarify what it is that you're asking for.
So I've gotten better at being more articulate as far as what it is and specific as to what I'm asking of my professors, of my peers as we're working on group projects. He says, I see this as being a great tool for managers to learn how to communicate better with their employees. I mean, how many times have you heard a manager say, well, I asked them to do something and they went and did it and they came back and said, here it is.
And I looked at them and said, that's not what I asked for. And they said, yes, it is. Yeah, we do this because we presume people have the same lens that we're looking through when we talk about a solution or something to be solved, but we leave out the part that is the context or the frame of reference.
And so now, because the machines don't have that perspective at all, sometimes they have no context, you have to be very literal as far as what you're asking for. That to me, I think, is actually going to be a good thing. The other thing that's happening is we're starting to realize talking to real people is actually a pretty cool thing, because after a while the chats are great, but they do tend to get a little redundant.
And so, if you want to have a long lasting, meaningful conversation with somebody, having it with a real person is probably a pretty good idea.
Hannah Clark: So that gets back to another matter that I wanted to discuss on here, which is this fear, especially in the content creation space of Generative AI coming for creative fields and being able to compete with people who are creating art, content, articles, those kinds of pieces of medium that I think are just inherently better by humans.
But there's a real fear around that for the future of that kind of artistry. So what's your take as somebody who's really in that space?
Ken Hubbell: Okay, so I'm going to put this in a different context. So when I was in design school, there were those of us who used markers and rapidograph pens and things like that.
And then there were a couple of people in my studio that had airbrush tools and they could airbrush their drawings. And we were always so jealous, it's like, oh wow, you really have got an edge on the rest of us. Well, then I got a hold of the Macintosh that had a 3D sculpting software on it. This was like the first one. I think it was called Super 3D or something like that. But long story short, I was all of a sudden able to crank out rendered illustrations of my product designs in the hundreds in the time it was taking the guys with the airbrushes to do one.
And so I put them up on the board during a design review for tool design we were working on. And I had people in the classroom going, that's cheating, you can't do that. That's using the machine to do your work for you. And I'm like, it is no different than the guys using the airbrushes to do what the guys with the markers have to do the long way.
It's just a tool. But it's amazing how those who are used to doing something one way, and that being the only way, and, by the way, they're very good at it because they've built those skills up for years, react when they're challenged with something where now I can describe something and get an output. That either I use as is, or I use as a foundational piece to then build on.
It's no different than going to a magazine and putting together a mood board. I'm now taking the mood board pieces and making a collage, and then I do a drawing of the collage that's just one piece. Now, there's more human labor involved in that. But the reality is, we've been using Photoshop for almost 20 years now.
And guess what? It's made more demand, not less. Desktop publishing was supposed to wipe out the publishing industry. Guess what? There's more publishing going on than ever. Now it's digital, but it's still publishing. Every generation goes through this. The weavers in France burned down the looms back in the, gosh, it was 1700s, 1600s, because somebody had come up with an automated way to make the looms work.
And everybody thought they'd be out of work. This is not new. Everybody thinks this is new. This is happening fast. It's happening incredibly fast, but it's not new. And guess what? If you're really creative, it's not a threat to you. If you're really creative, it's just a new medium. I don't know if you noticed it, but animation on TV went to paper cutouts again for a little while there.
And then they were doing jelly beans and seeds and stuff like that. There's always another way to do something, and when something gets used a lot, it's great. Everybody's doing morphing for a while, and then it's stopped. Why? Because everybody got bored with it. Okay, people are going to get bored with mid journey.
They're going to get bored with DALL-E. It will become part of the tool sets that people use. And as soon as we get past some of this other stuff that's going on with the IP, this, that, and the other thing, I think everyone, when they relax and realize it's all about creativity, then I think we're going to start seeing some amazing things happen out of it.
Right now we're in this wishy washy stage of we don't quite understand it all. Things are happening fast. The pitchforks are in the air and the torches are being waved, but it's going to calm down and people are going to realize this is yet another tool that will benefit us all as humanity.
Hannah Clark: I agree, and I think that there's a huge misconception with folks who are starting to dabble. For example, my other gig is to edit the Product Manager publication. And there's been a non-zero number of articles that have been submitted to me as original work that I can just tell inherently have been ripped directly off of ChatGPT and sometimes barely edited.
But what's interesting about that is that the work is semantically almost perfect and in a lot of cases makes so much sense, but there's just this inherent feeling that you get when you read it. That it's just not, it doesn't come from a human being. It hasn't come from a human being. I just don't connect with it.
Can you help us understand a little bit why that is?
Ken Hubbell: All right. So I'll give you a perfect example. So in my book, and I will admit, I did use our chat engine to help do some of the writing, not all of it. And it definitely was in the early draft stages as I was trying to work out some of the nuances of the sequencing of information.
But one of the things I had it help me do is, in the preface, I've got a number of scenario stories that I wanted to have written. And I wanted different styles of writing. And some of the styles of writing, I'm just not that good at. So I asked it to help me do it. And the ones that actually made it into the book are the ones I actually had to go back through and re-edit, like, massively.
And between myself and my daughter, who's a writer and an editor, she went through and she's like, Dad, this is crap. So she was ripping into it. But some of it was really good. And one of the things that was interesting is that with the stories, there were patterns of words that it used a lot, because, guess what, the predictive sequencing that generative AI does to craft a story or a narrative or a document, period, is based upon the fact that it is going out to billions of combinations of words that people have used.
And it's looking for patterns of what is the next logical word someone used when talking about something, whatever that topic is. And so early on in ChatGPT, for example, you actually watched it do this. It would start writing an answer. It would go, oh, the first word someone normally says when they're talking about why the sky is blue is 'the'. Okay, so that's there, and then it gets down and finally you get 'the sky is blue because' and then it's going on. Well, one of the interesting things is that it would hit a certain point, and it's literally building it word by word, or a little tiny phrase, or sometimes half a word at a time, it would get to a point and it would go, wait, this sentence doesn't make any sense anymore.
So it would scrub back and it would start writing again, and it would do this. And so you would see this thing zigzagging out. Sometimes it would write a whole sentence or two, and then it'd go back to the beginning and rewrite the whole thing. And so you watch this thing zigzagging all over the place, because what it was doing was predicting the next word or next set of words to go into it.
Well, if you ask it to write a story about, people frolicking in a meadow, it might say, On a sunny afternoon, blah, blah, blah. Well, I did this with a couple of different scenarios and it started off the same way three times. I was like, okay, this is not good. On a sunny afternoon or on a cloudy afternoon or on a cloudy day.
But it was always basically the same structure. And I was like, okay, I've got to somehow get it to do it differently. So I learned how to make the prompts different. And even then I would have to go back through and edit the heck out of it. Well, that's not much different than what we do as human beings.
A good writer who writes a book, or even an article for a paper, has an editor. So you're writing the draft and feeding it into ChatGPT or so forth, having it do the editing and then give it back to you, and then you go back through it and make the changes that the editing led to, and then you put it back in.
But it's a partnership. It's a relationship. What I was doing for a little bit was I was having to outline a couple of things. I said, I need a good outline for this topic. And it would come up with an outline, and then I would take that, write a bunch of stuff, feed it back in, say, what do you think, and fix all my spelling errors, because I suck at spelling.
And then it would come back out, and I would re-read it, and I would write my stuff and then I would hand it to my human editor, because I'm like, does this even make sense, because we just did this exchange. That was early on in the book, and it worked. This whole flow, it's a new type of flow in the creative space. Now, the amazing thing is this, and this is the key takeaway of this whole thing: the edits and the work that the computer does take seconds.
So the iterative process I was able to go through, that would normally take weeks because I'd have to send it off to my editor, she would figure out how to fit it in her schedule, do it, and then send it back, the AI was doing in seconds.
So I was able to go through 20, 30, 40 iterations in an afternoon that would have taken months to do. Now, as it is, the book still took nine months to write from beginning to end. And that was after a four or five year hiatus when I did the original outline before this stuff even existed and did the research to pull the materials together.
But it still took nine months, even with the help of generative AI and other people are realizing the same thing.
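The word-by-word prediction Ken describes can be sketched with a toy bigram model. This is purely illustrative: real LLMs use neural networks over tokens, not raw word counts, and the tiny corpus below is made up for the example. Still, it shows the core idea of always picking the most common next word, which is also why the outputs kept starting the same way.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the "billions of combinations
# of words" a real model is trained on -- purely illustrative.
corpus = (
    "the sky is blue because the atmosphere scatters blue light "
    "the sky is blue because of scattering"
).split()

# For each word, count which words follow it (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, mimicking greedy decoding."""
    return follows[word].most_common(1)[0][0]

# Build a sentence one word at a time, as Ken describes ChatGPT doing.
word, out = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))  # -> the sky is blue because
```

Because greedy decoding always picks the single most common continuation, every run starting from "the" produces the same sentence, which mirrors Ken's observation that his scenario stories all opened with the same structure until he varied the prompts.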
Hannah Clark: This is a perfect example of using it as a tool and the difference between generative content and creative content. I think this is really the differentiator for me, is at what point is it generation and at what point is it creation?
And I think that there's just this inherently human element that, at least at this stage in the development of these tools, we're just not there yet, that it can truly replace the input of a human being. But I wanted to move on because I know we're running out of time. And you touched a little bit on ethics and the ethics of AI and guardrails and those kinds of things.
And this is something we've discussed on the podcast before, but it's something I think you have a really unique perspective on. So specifically around inherent biases and the ethics around bias, what are some of the things that we need to be thinking about as we go deeper into this state of a rapid development of AI tools and what that is going to mean for folks as they increasingly rely on them, not just to generate content and to iterate, but also to use as a learning tool and other kinds of tools that facilitate their other work?
Ken Hubbell: So I'm going to give it in three points. One, there will always be bias. We are always biased. Everything in the world is biased. And the reason why I say that is that it's environmentally driven, it's contextually driven, and the machines are learning from us. Okay? And so if you look at the body of knowledge, let's just say, oh well, we don't want to use anything that was done in the last 75 years.
So that's off the table. It's all copyrighted. We can't use it anyway. And it may be skewed one way or the other for modern politics. So now we're going to go back. Now we're bringing in the works of Shakespeare, the works of the Greek classics, the Roman classics, the Egyptian hieroglyphs, whatever you want to do.
Now, the problem with that is this: every single one of those groups, every single generation, every culture on this planet has bias. So unless you're totally creating new material that is somehow inherently not biased, and that would be a minor miracle, you can't avoid it. It's just, by nature, who we are as human beings.
So we will never get rid of bias. Now, can we get rid of bad bias? Well, you can filter it, but bad bias is also based upon history. Okay? At certain points in history, things were different. That's just the nature of how it works. And by the way, tomorrow the bias will change. The day after that, the bias will change and it will continue to change and evolve.
So we can't get rid of that. What we can do is flip the mirror on ourselves and say, who's driving the bias? Well, the AI is not biased. It may have bias in it, but it is by itself not biased. It has no emotional tie to anything. It has no motive at all, but we give it that. So if we want to fix bias, we need to fix ourselves.
That's the big thing. And then as far as the educational aspect of it and how that ties in: when we think about this whole question of bias, that has to start at day one, when someone's born. We've got to start thinking about how we're educating our children, how we're raising our children.
So that they don't have some of the biases that we have. It's an interesting game, because it's really forcing us to look at ourselves, and that's a scary thing.
Hannah Clark: Yeah. It's like your son was mentioning about what he's learned as an anthropologist: you really have to be mindful of what you're inputting.
That's a really interesting perspective. But on the topic of education as well: this is a really rapidly developing product category. Obviously, we're seeing tools coming out left, right, and center. And as we discussed before, there's just no definitive source of truth right now on the developments that are in progress, how you learn about them, or how you learn to use them correctly.
Do you have any recommendations in terms of resources that people can turn to for quality information about the use and development of AI?
Ken Hubbell: So you can go to my blog site because I have a curated list of articles and videos and things like that. Ironically, or interestingly enough, my father has just absorbed AI.
He's all into it. And so he was originally just pounding me with emails: did you look at this? Did you look at this? And I said, I'm going to hire you to put stuff into my blog, and you can help me curate this thing. And so he does. He helps me curate it. And so we've got this growing list of articles.
There is so much stuff coming out right now. It is literally impossible to read it all, so I wouldn't even try. What I would do is go through and pick the things that are interesting to you, the things that are going to impact your world, your family, your business, and focus on that. There's more than enough to go around, and it's amazing how much of it is confusing.
I mean, I spoke at a mini workshop this past weekend with some educators who are going to be school administrators. And they were like deer in the headlights. They know what's coming, but they have no idea how it's going to impact the schools they're going to be working at.
They have no idea if there are going to be policies in place. Most school systems have barely gotten their arms wrapped around what this is going to mean. One of the things I do from a consulting standpoint, along with the book, is talk to schools, talk to people, talk to kids about what this means for you and your career, and what's going to happen in your life.
How can you prepare for it? How can you be proactive in what you're doing to adapt and adopt to what's happening? Because we've all got to figure this out together. And while those of us who were in this space talk about this all the time, 95 percent of the people out there have no idea what's actually happening.
And it's not because they're stupid or ignorant or anything. It's just because it is new, and it came out of nowhere for most of these folks. Even for the people who have been in this for a while: when ChatGPT hit, some of us were going, hallelujah, and some of us were going, oh my God. That was the impact.
And it's still going to be like that. It's going to be like that for probably the next 5 to 10 years.
Hannah Clark: Well, on the topic of your blog, what's the URL where people can find you, and where else can people follow your work?
Ken Hubbell: Okay. You can go to thepragmaticfuturist.com, and there are links to all of it from there. And that's one word, by the way: thepragmaticfuturist.com. Don't ask me how I got lucky enough to get that, because it just fell out of nowhere. You can also go to there-is-ai-in-team.blogspot.com, and yes, that is a ridiculously long name, and I should have thought more about it when I did it, but it works. So that's the actual blog itself, and I'd love to have you come visit.
There are places where you can leave notes and things like that. But you know, this is something we're all going to figure out together. And the more conversations we have, the more dinner-table conversations we have about this, the more humanness we have about us, the better off we're all going to be. So share it with your friends, but do it over a campfire with a cup of coffee or hot chocolate or whatever the heck you want to drink.
And really talk about it because this is not going away. We really need to embrace this. This is going to be a fun time. It's going to have its ups and downs, but I think it's going to be fun.
Hannah Clark: I agree. I think that it's a big disruption and we all just have to be ready for what's going to happen in the future.
Ken, thank you so much for joining us. This has been Ken Hubbell. I really appreciate your insights. This is such a fascinating field, so thank you for being here.
Ken Hubbell: I love being here, Hannah, and I wish you the best.
Hannah Clark: Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.