June 14, 2024

AI Recipes, Critical Thinking, and the Imperfect Human - with Kami Huyse

Is AI the greatest gift ever for lazy people? I've often said that if you want to find a better way to do something, ask a lazy person. Laziness, as it turns out, is a virtue. Lazy people want to find the most efficient way to get through a task. No wonder they're taking to AI like fish to water.

In this episode of The Trending Communicator, host Dan Nestle connects with Kami Huyse, CEO of Zoetica. A self-described "lazy entrepreneur," Kami has nonetheless blazed new trails in AI adoption and implementation and is a recognized PR industry thought leader, speaker, and author. Dan and Kami explore the transformative impact of AI on the PR and marketing industries, emphasizing the importance of ethical considerations and critical thinking.

Their discussion highlights the necessity of asking better questions, experimenting strategically, and maintaining a human-centric approach to AI. They also delve into AI's ethical challenges, stressing the need for transparency and responsible use. 

It's anything but a lazy conversation - it's a must-listen for professionals looking to navigate the evolving landscape of AI in communications.

Listen in and hear about...

  • The importance of an entrepreneurial mindset while embracing AI
  • AI "recipes" that can help you advance from a "home cook" prompter to a true AI "chef" 
  • How prompting skills may become obsolete as AI integration in software increases
  • The necessity of critical thinking, curiosity, and a strategic mindset for effective AI interaction
  • The importance of ethics and transparency when we use AI
  • How AI can assist in content creation while maintaining a brand's unique voice and style
  • Practical insights on leveraging AI for community building and social media strategy

Notable Quotes

On Being a Lazy Entrepreneur:
- "I call myself a lazy entrepreneur in a sense, because what I've always done is I look at a problem that's in front of me, and I try to figure out a way to do it more efficiently and different."
— Kami Huyse [00:03:33 - 00:03:45]

On the Evolution of Social Media and AI:
- "I've always been a forward-leaning person, and we did some of the first social media campaigns for big, big brands that you would know. So we did those social media campaigns early because we look at solving problems with current technology. And to me, AI is just an extension of that."
— Kami Huyse [00:05:00 - 00:05:20]

On the Importance of Asking Better Questions:
- "We need to start to think about how to elevate those conversations. What do we need to, what questions do we need to be asking? We need to ask better questions. That's where the training comes in."
— Kami Huyse [00:12:00 - 00:12:10]

On the Role of Imperfection in Human Creativity:
- "How do we have a pattern interrupt? I'll tell you how you throw in the imperfect human. That's how you tell imperfect stories. You talk about being a lazy entrepreneur. These are pattern interrupts that AI won't come up with on its own."
— Kami Huyse [00:30:0000:30:15]

On the Future of AI and Human Interaction:
- "We need to think about what can we bring to the table that is like a spice of human while understanding fully the AI and using it fully. So not throwing the baby out the bathwater, as we always said, but using the AI for what it's good for and throwing in your own spice."
— Kami Huyse [00:30:3000:30:50]

On the Ethical Use of AI:
- "AI isn't inherently evil, but the people who wield it are maybe so there's people that wield the AI. Here's the problem with AI in general. It's just like our society. Like, we talk about, you know, racism and bias and, you know, sexism and terrible things. Here's what AI does. It takes all of the knowledge of the world, and it boils it down and it repeats it back to you."
— Kami Huyse [00:59:4101:00:10]

On the Role of Communicators in Ethical AI:
- "We as PR communicators should be the ombudsman for our organizations. We should bring our point of view to protect the people that we represent as PR people. And the people we represent are our audience, are our customers, our stakeholders."
— Kami Huyse [01:06:2501:06:45]

Resources and Links

Dan Nestle

Kami Huyse

Books Mentioned


Timestamped summary for this episode (as generated by ChatGPT)

The origin of ideas (00:00:00)

Dan discusses his excitement for new ideas and tech, including the concept of creating custom GPTs.

The traits of successful idea execution (00:01:02)

Dan outlines the traits and qualities required for successful idea execution, such as discipline, focus, and support.

Introduction of Kami Huyse (00:02:01)

Dan introduces Kami Huyse, highlighting her background and expertise in PR, marketing, and AI integration.

Kami's problem-solving approach (00:03:46)

Kami discusses her approach to solving problems efficiently and her focus on community building and AI integration.

Evolution of media and technology (00:09:53)

Dan and Kami reflect on the digital transformation and the importance of hands-on experiences in the marketing profession.

The importance of AI training and mindset (00:11:55)

Kami emphasizes the significance of training and developing an entrepreneurial mindset for effectively integrating AI in communication and marketing.

Ethan Mollick's principles of AI interaction (00:17:07)

Dan discusses Ethan Mollick's principles for interacting with AI, including inviting AI to the table, being the human in the loop, treating AI like a person, and acknowledging AI's continuous improvement.

The importance of asking better questions (00:18:56)

Kami discusses the importance of asking better questions to guide AI in achieving desired tasks and outcomes.

The analogy of home cook and chef (00:19:37)

Kami compares the difference between a home cook and a chef to illustrate the importance of understanding and utilizing AI tools effectively.

The need for strategic training in AI tools (00:21:40)

Kami emphasizes the significance of strategic training for the next generation to effectively use AI tools and adapt to the evolving marketplace.

The integration of AI in software applications (00:22:54)

The discussion revolves around the increasing integration of AI capabilities in various software applications and its impact on prompting and human interaction.

The evolving role of prompting in AI usage (00:24:01)

The conversation delves into the evolving role of prompting in AI usage, the impact of contextual understanding, and the distinction between chefs and home cooks in utilizing AI.

The significance of asking power questions (00:25:04)

The importance of asking power questions and the role of strategic prompting in utilizing AI effectively is highlighted.

The importance of being relentlessly curious (00:27:07)

Kami emphasizes the significance of being relentlessly curious and training the brain to notice and understand AI interactions.

The concept of almost homemade solutions (00:29:01)

Kami discusses the concept of almost homemade solutions, encouraging individuals to add their own unique touch to AI tools for personalized outcomes.

The value of the imperfect human in AI interaction (00:30:01)

The significance of embracing imperfection and adding human elements to AI interactions for unique and effective outcomes is emphasized.

The need for critical thinking and curiosity in AI interaction (00:32:02)

The importance of critical thinking, curiosity, and asking questions to drive effective interaction with AI tools is discussed.

The importance of adding personal touch to AI tools (00:33:56)

The significance of adding a personal touch and unique spice to AI tools for tailored and effective utilization is highlighted.

The significance of being a critical thinker and inquisitive communicator (00:37:05)

The importance of critical thinking, curiosity, and being an inquisitive communicator in adapting to AI tools and driving effective outcomes is emphasized.

Critical thinking (00:38:20)

Emphasizing the importance of critical thinking skills in evaluating information and avoiding deception.

Asking better questions (00:39:12)

Highlighting the significance of asking great questions to avoid monolithic content and promote individuality.

Life experiences and personal touch (00:40:10)

Discussing the value of bringing personal experiences and unique elements to content creation to avoid generic outcomes.

Human element in AI (00:41:49)

Emphasizing the need for human input and caution when relying solely on AI, acknowledging occasional breakdowns and the importance of adding personal touch.

Ethics and AI technology (00:42:43)

Exploring the ethical implications of AI technology, including power consumption and the need for continuous input of new information.

Reading and vocabulary (00:48:27)

Stressing the importance of reading to enhance vocabulary and improve communication with AI.

Writing and editing (00:50:20)

Emphasizing the significance of being a good writer and engaging in editing to effectively communicate with AI.

Community involvement (00:52:52)

Advocating joining communities to learn from peers and stay informed about emerging technologies.

Curiosity and diverse learning (00:54:51)

Encouraging continuous curiosity, diverse learning, and open-mindedness to expand knowledge and perspectives.

Neuroscience and AI (00:57:08)

Discussion on the neuroscience behind AI interaction and the concept of neural networks.

Ethical Considerations (00:58:22)

Exploration of the societal, political, and civilizational changes brought by powerful digital entities and the focus on ethical AI usage.

Bias in AI (01:00:04)

Insight into the biases present in AI and the need for training to reduce biases, with examples of biased prompts and diversity representation.

AI Governance and Ethics (01:05:44)

Discussion on the need for external oversight in AI governance and the role of communicators as ethical safeguards.

Transparency and Ethics (01:08:24)

Importance of transparency in AI usage and the alignment of AI ethics with general ethical principles and governance.

Ethical Decision-Making (01:13:25)

Emphasis on the responsibility of individuals to inject morality and ethics into AI, and the ongoing discussion and importance of ethical considerations.

Backbone and Ethics (01:15:08)

The need for courage and backbone in ethical decision-making, with a focus on the role of communicators in promoting ethical practices.

Difficult decisions (01:16:14)

Discussion on the challenge of expressing contravening opinions and the weight of decision-making in today's society.

Tools and resources (01:17:14)

Mention of plans to discuss tools and resources, and a recommendation to connect with Kami for valuable resources.

Connect with Kami (01:18:00)

Information on how to connect with Kami Huyse on social media and her company, Zoetica, and a mention of her live streams and AI show.

Closing remarks (01:19:52)

Gratitude for Kami's participation and an invitation to subscribe, share, and provide feedback for the podcast.


(Notes prepared by humans with the assistance of a variety of AI tools, including ChatGPT and Flowsend.ai)

Transcript
1
00:00:00,320 --> 00:00:49,352
Daniel Nestle: Welcome, or welcome back to The Trending Communicator. I'm your host, Dan Nestle. You know, I've been accused of being good at connecting dots or people or both. I guess we can chalk it up to maybe pathological curiosity. And I get excited by new ideas and new tech. And I love to ask the question, wouldn't it be great if. Like when OpenAI rolled out its custom GPTs last year, I asked, hey, wouldn't it be great if we could create custom GPTs that can talk? And I'm air quoting talk here, like talk like our executives. Then we would be able to draft content in their voice and style. Oh, and then we could grab topics from Talkwalker and plug them into GPTs and see what they'd say. And then we'd have a treasure trove of ideas for social posts.

2
00:00:49,448 --> 00:01:34,752
Daniel Nestle: Oh, and then what if we could use them to pressure test fast ideas and so on? And I think you get the point. Well, it's one thing to have the ideas, connect a few dots here and there, and it's an entirely different thing to actually make them happen. You need discipline, focus, time to test and learn, patience, a simultaneously analytical and curious mind, people to believe in you and support you as you go through all these efforts. And it also helps to have a love of process, to get a thrill from a good workflow, to be the type of person who spends hours in a template library for fun. Well, my guest today is all the above and much more. Like me, she's a connector, a decades-long PR and marketing pro with a love for technology and how it can elevate our profession.

3
00:01:34,888 --> 00:02:13,026
Daniel Nestle: But she's also a successful agency CEO, one of the most trusted voices in social media strategy for PR, and now has been at the leading edge of incorporating AI in novel, practical and ethical ways to solve real business problems. A truly creative and curious thinker, she's held leadership positions within the Public Relations Society of America, the PRSA, has authored countless articles, and co-authored at least one book with me, The Most Amazing Marketing Book Ever, I might add. She's the founder of the active Social Media Breakfast of Houston and the CEO of Zoetica. Please welcome to the show my friend Kami Huyse. Kami, it's good to see you.

4
00:02:13,050 --> 00:02:15,906
Kami Huyse: Dan, it's so great to see you. What's up?

5
00:02:16,090 --> 00:02:34,024
Daniel Nestle: What's up, man? I love it. I love it when I have a real like, friend on the show. Not that like, all my guests are friends and they're wonderful people, but you and I know each other. And, you know, not for too long. We've only known each other a few years. You know, the book experience is one of those things.

6
00:02:34,964 --> 00:03:28,246
Daniel Nestle: But, you know, I'm really surprised that we've only known each other for a few years because, you know, once we started playing the name game, it turns out that those dots I was mentioning, those people connections, are just one dot away for you and I, running in very similar, parallel, overlapping circles, but never having connected until about a couple of years ago, thanks to our involvement with, and my listeners will get bored of me saying this, the Rise community and the wonderful Mark Schaefer, who introduced us. But we've become good friends. We have so many things in common. And I just love the work you're doing. And especially we'll get into the AI stuff, because when I say you're blazing territory that most people can't even imagine or conceptualize, I'm not lying. It's true. And I've seen it firsthand.

7
00:03:28,270 --> 00:03:45,384
Daniel Nestle: So I do want to get into that. But first, Kami, welcome. And I want to give you a few minutes, if you could just tell our listeners a little bit about who you are and what Zoetica does and what your main areas of concern are at the moment.

8
00:03:46,244 --> 00:04:26,364
Kami Huyse: Thanks, Dan. I'm so glad to be here. And hopefully, you know, our conversation can help people see things differently than they do. That's what I love to do. I call myself a lazy entrepreneur in a sense, because what I've always done is I look at a problem that's in front of me, and I try to figure out a way to do it more efficiently and different. Way back when, I actually worked at MHI, which was the Manufactured Housing Institute. This is many years ago now. Like, I've been in this business almost 30 years. I started with that kind of concept around digitizing our media inside of the organization.

9
00:04:26,444 --> 00:05:07,042
Kami Huyse: So going all the way from that, looking at all these slides, like, we had all these slides and large format slides, and I was putting them into FedEx packages and sending them out to members, and they were sending them back to me. And somewhere along the line, I said, why aren't we just scanning all these into a digital media and then letting people do that? And we turned something that was really kind of, what I call a PITA, into a profit center for the association. So, I mean, I've done this since the very beginning. Like, how can we take something and flip it? So at Zoetica, which is where I'm at now, we really look at the problems of our customers and try to solve them.

10
00:05:07,098 --> 00:05:49,584
Kami Huyse: Obviously, social media has been a huge part of what we do, in the sense of community building and that kind of thing. But one of the reasons I really focus in on communities as well, number one, is because I believe that this is the only thing that's persistent across all of this. So I've been doing social media management for about 15 years now of my 30-year career. So I started really early. I've always been a forward-leaning person, and we did some of the first social media campaigns for big brands that you would know. So we did those social media campaigns early because we look at solving problems with current technology. And to me, AI is just an extension of that. Right?

11
00:05:49,704 --> 00:06:35,950
Kami Huyse: We have a current technology that's coming up, and I look at these technologies and think, how can I practically apply this to the craft? You know, not just to do it. I don't do AI to do AI. I do AI to solve specific problems. And I think that's really where I come from, is how can I solve these specific problems? Because otherwise you're just kind of spinning your wheels. I mean, it's fun to go out and play, but, for example, custom GPTs, I wasn't really building those until recently, because until we got ChatGPT-4o, and then everybody was opened up to these GPTs, I didn't love the idea of creating a GPT for a minority of people that could use it, because you had to be a member to use them.

12
00:06:36,062 --> 00:06:52,170
Kami Huyse: So why would I make a forward, public facing GPT unless it was accessible? So I have to think about these things, like, what is worth the time and what isn't. So anyway, I know that's not really exactly what you asked, but that's what I'm doing. That's what I'm interested in. Right.

13
00:06:52,202 --> 00:07:03,494
Daniel Nestle: Well, we don't have rules on the show, so you can go whatever direction you want, as you know. And, you know, as a professional, as someone who's done this kind of stuff before, what do they always tell us? You don't have to answer the question that's asked.

14
00:07:03,794 --> 00:07:04,570
Kami Huyse: Bridging.

15
00:07:04,682 --> 00:07:47,194
Daniel Nestle: Beautiful. You bridge. And you know, you own the conversation, not the interviewer. And I love that when it's true. But on this show, we really don't have any rules, other than we need to have a decent conversation, and I don't think that's gonna be a problem. I love what you said. There's a couple things in there that, you know, are already threatening to derail my original intent for the conversation, but that's fine. And I love this thing you said about being a lazy entrepreneur. People laugh when I say it's the lazy people that make the world go around, and they say, well, why do you do all these things? I say, because I'm lazy.

16
00:07:48,774 --> 00:08:41,009
Daniel Nestle: But I'm working 14 hours a day, or I'm either working 10 to 12 hours, and then I'm taking a couple extra hours to get up to speed on what's happening in AI. Thanks to Rise, I'm able to go and see the best source of information that I've ever seen, bar none. All I need is a Frank Prendergast or a Tyler Stambaugh near me, and I get it. Or Brian Piper. They get it and they share everything, and it keeps me up to speed. So I'm doing that, and then I'm experimenting with custom GPTs, which, by the way, I pretty much created just for myself. I never thought of the public-facing part of that. It's an interesting point. But, you know, like, because I'm lazy. Because, like, I want to generate a lot of content fast, quickly.

17
00:08:41,081 --> 00:09:24,922
Daniel Nestle: And yeah, I know that on the other end, I'm going to have to do a lot of editing and fixing, but it sure beats all the time I'm going to spend staring at a page and sweating, right? So, you know, I'm looking for those shortcuts. And AI is, if nothing else, like one of those signposts in a Bugs Bunny cartoon, for those of us who are old enough to remember these things: this way to Albuquerque. And I think that's one of the things AI is. It's like a shortcut to Albuquerque. You just follow this sign and you're going to get there faster, with a lot of detours along the way for those of us who get distracted by bells and whistles, like me. But you said, I love that, that's a lazy entrepreneur.

18
00:09:24,978 --> 00:10:15,234
Daniel Nestle: And another thing that you said, which is an important topic, is about how you experienced early on, you know, basically this digitalization, digital transformation, where if you're packing things into FedEx envelopes and shipping them, and just, hey, wouldn't it make more sense if we could do it in a quicker, easier way and just put everything, you know, in digital format? You know, going back in the day, when you're going from brochure to website, same thing, right? I feel like new entrants into our profession, or maybe even into any profession, ours, marketing, anything, are going to miss out a lot by not having those experiences. Like, they're transformed already for the most part. I'm sure that there will be another version of that. Right. I'm sure it's going to continue to accelerate in different ways.

19
00:10:15,394 --> 00:10:33,492
Daniel Nestle: But there was something about that, about taking, like, getting your hands dirty with newsprint, you know, and pasting it into binders versus, you know, now it's just click, cut, paste. I don't know. I mean, maybe it's just nostalgia, but, yeah, there's.

20
00:10:33,508 --> 00:11:18,260
Kami Huyse: A lot of nostalgia there, for sure. And, like, when I was in college, I was on the student newspaper, and we would cut, you know, we would type up our stories and we cut them out. We'd put them through a wax machine and then, like, roll them onto a page for our printing. No, I mean, this is how, like, I mean, I've been around a long time, guys. A lot of people think I'm a young chick, but I'm not. Yeah. The truth is that I do believe that this current generation is also going to have their moment. I don't think that just because they didn't have our experiences, that their experiences aren't just as important and valid. So one of the things that I think is really important is developing, as a marcom professional, your entrepreneurial mindset.

21
00:11:18,372 --> 00:12:08,474
Kami Huyse: So what is it that you are doing? You are trying to launch new ideas into a difficult space, and that's, especially if you're in corporate, I mean, that is extremely important for you to be able to put together a use case and get that out there in a way that it's going to be accepted. So it takes a lot of brainpower to do that. And I still think that's the human part. Like, the AI can't do that for you. It absolutely cannot. So one of the things that I think is really important is spending time understanding the medium, which I call the AI medium. I talk a lot about AI recipes versus AI prompts because, to me, and I've been talking about that for over a year now, AI recipes versus prompts, because the prompts are changing.

22
00:12:08,514 --> 00:12:49,694
Kami Huyse: Anyway, we were talking about this even yesterday. We were talking about how we are going to be able just to open up our phone, and we already can, and just talk to the AI and conversate. Or. Conversate. That's the wrong word. Have a conversation with the AI, and that is going to help us. And again, I already am doing that with Google. Right. Hey, Google, you know, and it comes in and it helps me out. By the way, Google just responded, so I'm just saying that we are going to be like that with AI. We're going to have conversations with AI. And so we need to start to think about how to elevate those conversations. What do we need to, what questions do we need to be asking? We need to ask better questions. That's where the training comes in.

23
00:12:49,774 --> 00:13:37,590
Kami Huyse: And that's what I love, by the way. That's just my joy and love, to work with communicators to get them up to speed on that. Both my clients, and also, I also have a mastermind for PR consultants and coaches to help them with this. So I'm trying to train myself and train others on how to really bring AI to the table, if you want, for every single conversation. Like, so how can we do this, and what's the right tool for that? And that's the questions I'm asking right now. And then I'm putting them out in, like, my live streams and all the things that I'm doing. I'm trying to, you know, teach or learn and teach. I've done this my whole career. I learn and then I teach. I learn and I teach.

24
00:13:37,742 --> 00:13:41,862
Kami Huyse: So I think that's really important right now for all of us as marcom professionals.

25
00:13:41,998 --> 00:14:35,522
Daniel Nestle: Yeah, learn, teach. That's the secret to continuing to learn. You don't learn something until you're able to teach it, is what they say. I've been doing a lot of AI training on, I think, what we would, air quote, call the basics these days. But I don't even know if the basics are basic anymore. I think in the bubble that we live in, it's basic, but for most people, it's still quite advanced. The skills are not advanced. The mindset is a whole different issue. And then understanding that you're not dealing with something that is like Word or Excel. It's not just a typical software-as-a-service platform. I mean, I don't even call it a technology when I talk to people about it. I just call it some other thing.

26
00:14:35,578 --> 00:15:32,694
Daniel Nestle: And that has to be the foundation to moving forward with AI. And, you know, I like what you're saying about, you know, helping people to kind of see it from that solution standpoint, you know, like, what are we practically going to do? That's the approach that we need to take. And I think answering those questions, or backing in from that, from the use case approach, is what I mean. So, like, when we think about the practical applications of AI, we think about, like, you know, how people should be approaching it. Those two words, use case, I think are maybe the most important two words right now as we move forward. You talked about how prompting is recipes versus prompts. I totally agree.

27
00:15:33,634 --> 00:16:26,390
Daniel Nestle: I like to think of prompts as just recorded snippets of conversation that work, or game instructions, almost. It's just a replicable pattern in some ways, or dare I say, a template that you can keep using. But it's really the use case that comes out of that where you get the value of the recipe, or you don't know if it tastes good until you cook it. And just getting there is maybe the thing that new entrants to the profession are going to be kind of dealing with. I'm sorry, I keep rambling, because there's something that I want to get to here, and that is: you Ethan Mollick'd me a bit there when you said invite AI to the table.

28
00:16:26,462 --> 00:16:28,558
Kami Huyse: I did a little bit. I did, I did.

29
00:16:28,686 --> 00:16:32,150
Daniel Nestle: I just turned his name into a verb. I might be the first person. You caught that.

30
00:16:32,262 --> 00:16:33,274
Kami Huyse: You caught that.

31
00:16:33,974 --> 00:17:04,526
Daniel Nestle: But, yeah, inviting AI to the table. I don't think we're going to need to invite AI to the table for that much longer, because AI is not only at the table, but it's, like, part of the wood or steel or whatever the table is made of now. So look at Adobe. Look what Adobe's doing with Acrobat. You got the button. Mark calls it the button. Mark Schaefer calls it the button. Or maybe that's Ethan Mollick, too. Ethan Mollick calls it the button, Mark Schaefer says. So let me back up.

32
00:17:04,589 --> 00:17:06,998
Kami Huyse: You're talking about Ethan Mollick. You better tell them who he is.

33
00:17:07,126 --> 00:17:55,736
Daniel Nestle: Ethan Mollick, Wharton professor and AI genius, has done more with the area of prompting, and trying to understand the limits, or lack of limits, of what you can do with generative AI, than anybody else out there, I think. As far as I know, he just wrote a book called Co-Intelligence, which is tremendously important. It's not a technical manual or manifesto. It is a guideline to principles of how to interact with AI and where it might be going. And the first principle was invite AI to the table. The second principle is always, like, be the human in the loop, or keep the human in the loop. I'm not going to quote him.

34
00:17:55,840 --> 00:17:57,040
Kami Huyse: Be the human in the loop is.

35
00:17:57,072 --> 00:18:13,302
Daniel Nestle: Be the human in the loop. The second. Right. And the third one is treat AI like a person. You got them in front of you. Treat AI like a person. And the fourth one is basically that this is the worst AI you're ever going to see. So AI is always going to get better. Yeah.

36
00:18:13,398 --> 00:18:26,430
Kami Huyse: I do want to back up to the treat AI like a person, but tell it what kind of person it is, which is really part of it, too. So one of the things I talked about in our last conversation is that we have to ask better questions.

37
00:18:26,582 --> 00:18:26,942
Daniel Nestle: Yes.

38
00:18:26,998 --> 00:18:45,612
Kami Huyse: One of the things I do every time when I'm interacting with AI is I ask a lot of questions. I think people are used to going to a Google search and searching and asking. And we've been trained to get as specific as possible about what we're asking for, to get what we need. And I think we're pretty good at it. I mean, Google searching is something we've learned. Right.

39
00:18:45,628 --> 00:18:56,684
Daniel Nestle: That's the mindset, that's the mindset that I'm talking about. That's the first kind of physical manifestation of that mindset that I try to knock out at the very beginning of any one of those trainings. Sorry for interrupting. Go on.

40
00:18:56,724 --> 00:19:34,216
Kami Huyse: No, you're totally good. So ask better questions, but I'm going to go back to my food analogy. So, yeah, you also, you have the home cook and you have the chef. What's the difference between the home cook and the chef? It's the ingredients, it's the special touches, it's the understanding of that. And so we're really working to try to go with AI from being the home cook AI to the chef AI. And it takes a little bit of effort on that part. One of the parts that really has worked for me is that after I go through an entire conversation, sometimes it takes a lot of work to get the AI where you want it to go.

41
00:19:34,360 --> 00:19:34,664
Daniel Nestle: Yeah.

42
00:19:34,704 --> 00:20:06,232
Kami Huyse: At the end of it, or even at the beginning of it, I might ask it, I want you to do this. What would be the best things for me to give you to get this accomplished? Whatever the task is, I'll ask the AI what it wants from me in order to do what I'm asking it to do. After we're done, if it takes me a lot of effort to get to a certain point, I'll say, how could I have gotten to this quicker with a prompt? Like, what prompt would you give me to get to the result we got together more quickly?

43
00:20:06,328 --> 00:20:06,944
Daniel Nestle: I love it.

44
00:20:07,024 --> 00:20:47,814
Kami Huyse: It's one of the best little secret weapons I use all the time. And then I can take that prompt and I can teach it to others, and I have. So usually, when you get a prompt from me, or a recipe, it has been well simmered, cooked, tested. It's been in the test kitchen. And by the way, these recipes might not work a year from now; you may need new ones. You're going to have to consistently test them and upgrade them. So think about being a baker who makes bread. Say you're a sourdough baker. My mother got really into sourdough for a while, and this is, like, a huge thing.

45
00:20:47,854 --> 00:21:32,354
Kami Huyse: Like, people sharing their batches and coming up with the best starter batch and, you know, passing that down through the generations to the next starter batches. I mean, this is really what these AI recipes are. We need recipes versus, like, direct things. That's why I say don't buy, like, a hundred thousand prompts for, you know, a mere $27, because, I mean, the prompts are fine. Probably you'll catch a few out of there. I'm not saying they're worthless. I'm just saying that at the end of the day, it's much better if you learn how to do it yourself. And so if you could get just a little bit of direction and then become that cook. If you don't want to be the cook, I get it. You don't want to be the chef.

46
00:21:32,474 --> 00:22:16,102
Kami Huyse: Maybe somebody in your team is the chef, and you work with them on that and get them the education they need to become that chef. That's definitely something I'm leaning into right now, heavily, is how to train the next generation to use these AI tools in a strategic way. And that, I think, is what I'm very good at, is strategy. I've always been good at looking at what's coming, what's next, what do we need to do to, like, shift into the marketplace so that we take advantage of something? So that strategy piece is huge. It always has been, it always will be. I don't care what kind of technology they throw at you. You have to learn where to spend your time and where your time is not well spent. And that strategy, which is why you.

47
00:22:16,118 --> 00:23:02,064
Daniel Nestle: Are a trending communicator, by the way. That is the name of my show, and that's why you're here. It's really that forward looking at how these, you know, these technologies, for lack of a better word, but really how the trends that are affecting our profession, communicators, what's that going to mean to the future? I do want to kind of ask a little bit more about this whole, you know, this chef versus home cook analogy, because if you tie it back to what Ethan calls the button and what Mark calls AI will come to you. Like the idea that AI is already, it's already embedded in so many things. But in any application these days, any software application, they're just adding more AI capabilities.

48
00:23:02,884 --> 00:23:48,404
Daniel Nestle: I mentioned Adobe. Just, you open up a PDF in Acrobat, and you have the summarize this document thing there. Copilot's like that. Google's product is like that, where it's just in the mix, and it takes a lot of the magic out of the prompting. It doesn't mean you can't prompt, and it doesn't mean that you shouldn't, but it means that if you were not going to prompt in the first place, or if you were not kind of leaning in that direction, well, now you don't have to. Again, it's a lazy solution, but I feel like there's going to be, I don't think prompting is going to go away. I think that the necessity for it might decrease.

49
00:23:48,744 --> 00:24:41,864
Daniel Nestle: But I do think you're going to see a difference between the chefs and the home cooks, in that there's going to be only a few chefs after a little while, because people stop working on the prompting and are like, oh, it's already in Copilot. GPT-4o, that's the letter o, I think already has contextual understanding capabilities, so that as you prompt, it sort of already starts to understand its role. And as the memory improves, I suppose, of the things that you're asking and the kind of questions you continue to ask, the AI just adjusts. It starts to understand what you want it to be without you even telling it in the first place the kind of person it needs to be. But it's still more effective to tell it the kind of person it needs to be in the first place.

50
00:24:42,364 --> 00:25:39,144
Daniel Nestle: Far better to do it that way. So, you know, ultimately, AI is coming to you. AI is coming to everybody. It's in everything. Do you have to learn how to prompt? No, probably not. But right now in our profession, certainly for the next year or two, maybe in any profession, really, being able to open up any of the AI apps and go back to that original question I asked, which is, wouldn't it be great if. You look at the empty screen and you say, wouldn't it be great if. And then the next part of that question becomes the core of your prompt. Wouldn't it be great if I could design a dashboard for my leadership that would show the real value that PR professionals give to the organization? Okay.

51
00:25:39,844 --> 00:25:54,212
Daniel Nestle: Act as a data scientist and PR expert who has years of experience in corporate environments. What should I do to design a dashboard to talk to my...

52
00:25:54,268 --> 00:25:55,060
Kami Huyse: What should that include?

53
00:25:55,132 --> 00:26:49,944
Daniel Nestle: What should that include? And that kicks off the conversation, and it has to be a conversation. I love Chris Penn's work that he's doing with his Almost Timely News. And he was in the Rise community having a discussion recently. But his framework for prompting is fantastic. The RACE framework: role, action, context, execute. That's almost baseline these days, but people don't think about it. And his power questions to keep asking, you know. For me, as somebody who talks for a living, essentially, and tells stories, I mean, asking more questions is never a problem. But I suppose that it's good to give people the kind of nudge, you know? Hey, this is the time where you would ask another question. If you were talking to a person, wouldn't you ask another question right now?

54
00:26:50,284 --> 00:27:07,294
Daniel Nestle: And when I train people, or when I talk to my team about it, I just always use the word interrogate, you know. And it could be a harsh word if you think about it in some ways. But no, interrogate it. Like, ask it, interview it, interrogate, iterate. The "I" words. Do it.

55
00:27:07,334 --> 00:27:16,414
Kami Huyse: Yeah, that's the way. I love that so much. And I love that you said that, you know, a person needs to be relentlessly curious, because I think that's really part of it. That's what you said you are, and I think that's right.

56
00:27:16,534 --> 00:27:16,790
Daniel Nestle: Yeah.

57
00:27:16,822 --> 00:27:53,208
Kami Huyse: And you have to know, you have to notice things. And the way that you notice things is that you prime your brain to notice them. So we have something called the reticular activating system. It's right here in your brain. And that reticular activating system is what tells your brain what to notice and what not to notice. It's the thing in your brain that, when you buy your new car, makes you see everybody driving that new car. It's how you train your brain to look. And so one of the ways you train your brain to look for these things and to understand them is to tell yourself what you're trying to accomplish. So, you know, if you're trying to accomplish a certain goal with AI, talk about that to yourself.

58
00:27:53,326 --> 00:28:10,292
Kami Huyse: Like, okay, what am I trying to imagine here? By the way, I've always talked to myself, which may be a good thing. But in this case, I think that's kind of what you're doing when you're talking. It's like you're talking to yourself, right? My husband's like, what are you talking about? I'm like, oh, I'm not talking to you. I'm talking to myself.

59
00:28:10,428 --> 00:28:11,940
Daniel Nestle: "What are you mumbling?" my wife says.

60
00:28:11,972 --> 00:28:43,466
Kami Huyse: What are you mumbling about over there? So I think that's really important. And again, let me bring it back to my baking analogies. These days, you're right that these tools are going to be baked into all of our tools that we use. I mean, I've seen it everywhere. All of the tools are adding it. Canva has it, Adobe has it. Sometimes they charge you a little extra for it. So you may or may not be using it, whatever. But it's the difference between using a boxed cake mix and creating a cake from scratch.

61
00:28:43,610 --> 00:28:43,954
Daniel Nestle: Yeah.

62
00:28:43,994 --> 00:29:22,022
Kami Huyse: But then there was this other movement that came along called the almost homemade movement, where you take the box cake and you add a little spice in there of your own. So what does that look like inside of the tools that you use? I think that's really going to be one of the most important questions that we have here. What is the almost homemade solution? So maybe you will never, you know, completely go all the way from scratch like I have and Dan has and, you know, play with this, because you maybe just don't, like, love it as much as we do or care as much. And I get it. I get it. It takes a lot of time and brainpower. I get it. And you have other things to accomplish as a marcom professional.

63
00:29:22,078 --> 00:30:04,862
Kami Huyse: We have deadlines and we have things to do and so on. So maybe that's just not your mindset. But what if you could think about that little bit of spice that you can add, so that you're not just taking what the box gives you, you're putting a little bit in of what you're trying to accomplish, to make it very specific to your brand. So I talk a lot about human-centric AI. I do an entire speech about it, and I talk about the human elements, because everything can be perfect now. Like, you could write perfect copy and you can write perfect this and perfect that. Already I see the posts coming up that are, like, all of these words.

64
00:30:04,998 --> 00:30:52,248
Kami Huyse: I'm sick of them, because, you know, it seems like that's all that AI, like, you know, pumps out are these words. I do think that some of those words are very specific to those people, by the way. I think the AI is, like, you know, learning their style and then spitting it back. So my point is, how do we have a pattern interrupt? I'll tell you how. You throw in the imperfect human. That's how you tell imperfect stories. You talk about being a lazy entrepreneur. These are pattern interrupts that AI won't come up with on its own. So I feel like that's really where we're headed here, is that we need to think about what can we bring to the table that is like a spice of human, while understanding fully the AI and using it fully.

65
00:30:52,336 --> 00:31:18,048
Kami Huyse: So not throwing the baby out with the bathwater, as we always said, but using the AI for what it's good for and throwing in your own spice. And I think AI is a lot like a very intelligent, like, graduating college toddler. You know, like, you've seen these like small kids that all of a sudden they're going to college, you know, they're ten years old because they're just so smart. I mean, that's AI. It doesn't have a whole lot of sense, though.

66
00:31:18,216 --> 00:31:19,544
Daniel Nestle: There's not any sense.

67
00:31:19,664 --> 00:31:20,776
Kami Huyse: Common sense.

68
00:31:20,960 --> 00:31:36,104
Daniel Nestle: Well, you know, the only sense it has is that it wants to please. It doesn't have any sense. It doesn't know anything. It's a big pattern recognition machine, right? I mean, I guess there are people who will argue it does know things, whatever. That's completely above my pay grade.

69
00:31:37,724 --> 00:31:39,124
Kami Huyse: It's way above my pay grade.

70
00:31:39,204 --> 00:32:33,094
Daniel Nestle: Yeah, I mean, I'll leave it to the scientists who are struggling with the singularity to think about that. But in our case, when we're talking about the AI mindset, so first of all, the imperfect human. When people start to talk about, well, AI is going to put me out of work or put me out of a job, I don't think that they're understanding that the flaws are the perfections. The flaws are what makes something interesting. I mean, maybe you do try to find a flawless diamond, but the flawless diamond doesn't quite have the same kind of sparkle or light play that one with a slight flaw somewhere does. You know, it's the slight flaw that makes it unique and interesting, and I think it's gonna be the same thing.

71
00:32:33,394 --> 00:32:47,146
Daniel Nestle: I love that I've written that down and circled it seven times. The imperfect human. Because that could be the secret solution to people who are worried about where they're going with AI. But that, you know, and I know.

72
00:32:47,170 --> 00:33:14,012
Kami Huyse: Your listeners can't hear us, but, like, I. And I don't. I don't see us. I have a shirt, and I didn't wear it today, but that says perfectly imperfect. I wear it a lot out when I speak to. I've been using the perfectly imperfect shirt here. Here's my point, is that, this imperfect human is so important, but I also don't do the job of, pulling slides out of a binder and putting them into FedEx packages anymore.

73
00:33:14,148 --> 00:33:14,916
Daniel Nestle: Yeah.

74
00:33:15,100 --> 00:33:57,532
Kami Huyse: Do you know what I'm trying to say? So, yeah, you aren't necessarily going to do the job that you do right now as you do it. But if you want to stay relevant, you need to start thinking about how you can apply these concepts to your job and bring the thing that you are great at. Like, you started really kindly, Dan, talking about what you think I'm great at. But every person listening today is great at something. They're great at something. You're great at something out there. Whoever you are, I know there's somebody listening that this is for. You are amazing at something, and you need to really lean into what that is. And if you don't know what that is, take some tests, talk to us. Send me a message on LinkedIn. Let's talk about it.

75
00:33:57,548 --> 00:34:34,839
Kami Huyse: But let's get you dialed into your brand voice. I do a brand voice workshop for free, like, once every quarter or so, and one's coming up. But the point is that you need to go out and figure it out, because if you can lean into what makes you unique and interesting, then you can bring that to the job. You can change the way that you do your job by putting these tools into place, but also adding something in that is your spice, that is your seasoning. And I think that's so important. That's why I talk about cooking. 'Cause I really, I guess I could talk about sports, but I'm not a sports person, so I talk about cooking.

76
00:34:34,951 --> 00:35:38,046
Daniel Nestle: You know, when you get a recipe book, whatever it is. In my house, we have the big book of Jewish cooking, which is ancient and old and fantastic. And my wife is Japanese, so she married this Jewish guy. And it's not like I have this long history of only eating ethnically Jewish food or food of my ancestors from Israel and Eastern Europe and so on. I do like me some hummus, but that's neither here nor there, because, frankly, the food of Eastern Europe is sometimes really awful. But we have this book, big book. Anyway, here she is, a Japanese woman who is extremely focused on regimentation, on doing things the way they're written for the first time. So she takes the recipe, cooks it the way it is, and she's like, it's all salty. Which is 100% true. Jewish food tends to be salty.

77
00:35:38,170 --> 00:36:20,036
Daniel Nestle: So next time around with the stew, she cuts the salt, you know, but not enough to make it flavorless, just enough to keep it flavorful. She's like, you know what? I cut the salt. Now I'm going to have to add something else to kind of counterbalance that. And then the recipe starts to take a slightly different shape. It's still the same food, but for some reason, her matzo balls are fluffier. Or for some reason, the kugel is sweet, but not, like, sickeningly sweet, like you often get. I don't know if you know what kugel is. It's a noodle pudding, but, you know, it's noodles and cheese. It's fantastic. But hers is just delightful. And my mother, who's cooked this stuff all her life, is like, I can't believe this.

78
00:36:20,060 --> 00:37:14,036
Daniel Nestle: I can't even, you know, how does somebody who's never done this in their life just get it so right? Because they're taking great recipes and adding their own kind of spice to it, as you say. And that's a long way of saying that those of us who are creating all of these prompts, all these use cases, these recipe books, the grimoires, as Ethan Mollick likes to say, the spell books. I fully expect that somebody is going to just cut and paste what I've done from one of our, you know, one of our use case libraries. They're going to cut and paste something. Oh, Dan did this one. It's about creating a newsletter, how to create a newsletter. All right, I'll try this one. But, you know, it's not going to work for them the way they want it to work for them.

79
00:37:14,060 --> 00:37:55,514
Daniel Nestle: So they're going to have to tweak and play, and the way they're going to have to do that is by not taking anything at face value, asking lots of questions, being a critical thinker, and being curious and inquisitive. The core skill that communicators have, that a lot of marketers have, too. I mean, it's the thing that makes us those imperfect humans, but it's also the kind of, you know, the secret sauce in some ways of what you need to drive AI forward. So where I want to go with this, Kami, is basically, like, we've been talking all over the place, really. But what do you think are, like, the key.

80
00:37:56,134 --> 00:38:19,914
Daniel Nestle: The key skills or elements that a professional in our industry, no matter what level you're at, I suppose you can break it down, will need for the next three to five years, not only to stay relevant, but to really be able to interact really well with the AI and get the most out of it.

81
00:38:20,254 --> 00:39:00,724
Kami Huyse: Yeah. So I think that's amazing. That's a really good question, and it's exactly what we need to talk about. So, number one is critical thinking skills. I would say that's number one. Like, you cannot take everything at face value anymore, and it's really important not to, for lots of reasons. AI can be deceptive, as we know, not on purpose, but it's just the way that it works, and you have to, like, put it in line. Often when I see something that AI does wrong, I go back and tell the AI, no, it's not quite like that, is it, now? And it will come back and it will fix itself. Terribly sorry. Yeah, you're right. After looking at that again, you're completely correct. There is no such organization as that.

82
00:39:01,184 --> 00:39:51,608
Kami Huyse: So you have to be extremely critical in your thinking. And also, number two, and it kind of goes with critical thinking, is you need to ask better questions. I think we've already said this, but asking great questions. Questions are everything. And the reason I say that, too, is because, just like you said, if I took your recipe for a press release or whatever and put it in, even if it worked perfectly, can you imagine what would happen if everybody started using that recipe? Then guess what? Every press release would be exactly the same, with exactly the same elements. And I shouldn't have used a press release, because that's actually been the case for eons. But say a blog post or a newsletter article or whatever. It becomes so, like, monolithic and same.

83
00:39:51,736 --> 00:40:35,970
Kami Huyse: So you have to think about that all the time. You were talking about your big Jewish book of cooking. There's two things. Number one, I bought this huge book from Cook's Illustrated years ago. It's a recipe book for food. And they have a test kitchen. And they go into the test kitchen, and they test all the recipes. And inside of the recipes, they tell us what they tested, how they tested it, why it works this way. They tell us about the times they failed. I love this book, by the way. I've had it forever. So you can imagine when this came out, that's really kind of relevant. So I would say, also, step outside and have some life experiences and bring them back to the AI. So those life experiences help you a lot.

84
00:40:36,002 --> 00:41:20,624
Kami Huyse: So that'd be three. Right? Also, I was going to talk about my Nani. So my Nani, she passed away, unfortunately, about 17 years ago. But she made, it's okay, she made an amazing gumbo. Gumbo was her recipe. And we would wait to go to my Nani's house to eat the gumbo, right? To eat the gumbo. And before she died, she was like, really, I want to pass on all my recipes. So I have a book of recipes from my Nani, and in there is the gumbo recipe. And we sat down with her, like, what do you put in it? What do you do? She didn't have it written down. It was in her head. So we've put that whole recipe together. But guess what? The first time I made that, it did not taste like my Nani's gumbo.

85
00:41:20,744 --> 00:41:22,840
Daniel Nestle: Imagine that. Imagine that.

86
00:41:22,912 --> 00:42:04,878
Kami Huyse: Because she puts a little more of this and a little more of that. Like your wife, right? She has created that. Now, I think I've gotten slightly better at it, but it's just practice. So, again, you need to take your recipe, add your little elements, and don't take it just at face value. And some of these tools are going to force you down a specific road. Like, it's going to be this way or the highway. They've, like, put in the AI, and it's like you push the button, like you said, and it comes out. The problem with that is that you cannot take that at face value, and you need to add something in that is yours. So there's always the human element that must come in. I think the fourth, he had four principles, Ethan Mollick.

87
00:42:04,926 --> 00:42:37,924
Kami Huyse: And one of them was, assume this is the worst AI you'll ever use. And that's true, because the AI is getting better and better. But I also think some of it is going to get worse sometimes. What we've noticed as we've used the AI is that every once in a while, the AI breaks down and it becomes really stupid for a while, because they're changing things on the back end, right. They're moving servers, they're changing the way they're doing the cores. There's a lot of things going on behind the scenes, because this is like a super power hog. So you want to talk about ethics?

88
00:42:37,964 --> 00:43:15,310
Kami Huyse: I want to get there, but this particular technology has a huge power load, so they're definitely trying to figure that out and moving the cores, because there's these cores that they use on the AI in the back end. This is the geeky stuff you don't need to know about inside the box. But what happens is things shift around and it gets stupider. We also are talking about access to information. So AIs have voracious appetites. They need new content to be relevant and good. And what happens after some time is that the model will break down if you don't put new information into it.

89
00:43:15,422 --> 00:43:37,502
Kami Huyse: I was talking about my fear of writing sweatshops around the world, where poor people sit in a big building writing original content to feed the lions, if you will. Right now they're doing it through partnerships, like with the Associated Press, and the New York Times is about to do a partnership, I think.

90
00:43:37,638 --> 00:43:47,330
Daniel Nestle: Yeah, we saw some. The Atlantic and a couple of others. And that's just the English-language stuff. There's more.

91
00:43:47,362 --> 00:44:32,374
Kami Huyse: Yeah, that's just English language. Oh, yeah, there's so much more. And so I feel like, because of that, we have a huge opportunity right now to understand that this is not like Google. Let me go back to that again. It doesn't necessarily know what you're asking it, and it may make huge mistakes because it has no knowledge about something. It may have no knowledge base on a topic at all, so it can make huge mistakes that honestly cause all kinds of problems. We have to understand how the technology works, a little bit at least, and realize that this is a different animal than a Google search. People get really surprised.

92
00:44:32,494 --> 00:44:47,126
Daniel Nestle: That's right. It's so important. People get really surprised when I tell them that the AI just wants to please you. It's not Google. It doesn't want to give you the answer; it wants to give you an answer you like. And that's why it fills in the blanks.

93
00:44:47,150 --> 00:44:49,534
Kami Huyse: Well, it wants to solve your problem, right?

94
00:44:49,614 --> 00:45:46,056
Daniel Nestle: And that's why it'll fill in the blanks with stuff it doesn't know, because it's just putting patterns together that it thinks will match the pattern you give it. It is a big pattern-recognition machine, and those patterns are getting more and more complex, the weightings and all, to get the geeky stuff out there too. But still, that is why it fills in this weird, wacky information. Even in the early days, two years ago, I was experimenting with uploading transcripts and creating meeting reports, which now you can do with one tool, but that wasn't as easily available at the time. I would grab transcripts of meetings, upload them into ChatGPT, and have it analyze the meeting. And it totally made things up. ChatGPT came out in November of '22.

95
00:45:46,200 --> 00:46:36,784
Daniel Nestle: So this must have been March of '23, a couple months in. I gave it a 15-minute transcript: perfect. I gave it a 35-minute transcript, and suddenly it's telling me, well, five minutes in, Ken Lamb says blah, blah, blah, and then Doug says blah, and then Emily said this. Well, there was no Ken, there was no Doug, and there was no Emily. And it kept insisting, and I kept saying, no, listen again, read the transcript, you'll see that the first speaker is Troy, the second speaker is Dan, the third speaker. Oh, yes, I'm terribly sorry, that's right, I can see that now. And then: so what happened? Well, Doug said something. Okay, there's no Doug, right? You start to get angry. So now, of course, it's much better at these kinds of bigger things, but not completely.

96
00:46:37,324 --> 00:46:43,264
Daniel Nestle: It's not completely fixed. So, yeah, you always have to be on your toes.
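
Dan's workaround is easy to make systematic: pull the speaker names out of the transcript first, then hand the model both the transcript and the closed list of speakers it is allowed to cite. The sketch below is illustrative, not the method used on the show; it assumes the current OpenAI Python SDK, a transcript formatted as "Name: line", and a placeholder model name.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def summarize_meeting(transcript: str) -> str:
    # Collect the speaker labels that actually appear, from lines like "Troy: ..."
    speakers = sorted(set(re.findall(r"^([A-Za-z][\w .'-]*):", transcript, re.MULTILINE)))
    guard = (
        "Summarize this meeting transcript. The only speakers are: "
        + ", ".join(speakers)
        + ". Never attribute a statement to anyone not on that list, "
        "and never invent names or quotes.\n\n"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": guard + transcript}],
    )
    return response.choices[0].message.content
```

Giving the model the exact pattern you want matched leaves it less room to invent a Ken, a Doug, or an Emily, though, as Dan says, you still have to check the output.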

97
00:46:43,684 --> 00:46:45,244
Kami Huyse: Critical thinking number one. Right.

98
00:46:45,284 --> 00:46:49,104
Daniel Nestle: Critical thinking. Right. Just do not take anything at face value.

99
00:46:50,044 --> 00:46:50,668
Kami Huyse: Yeah.

100
00:46:50,796 --> 00:47:19,804
Daniel Nestle: You know, I wanted to ask you to continue the list, actually. There was something else, but that can wait, because you did mention ethics, and I put a big circle around that. We'll get there. So far we have critical thinking, ask better questions, and bring your life experiences. I don't know if that last one is so much a skill as a kind of mindfulness necessity: make sure you're out in the world, because the more you know about the world, the better you're going to be with this stuff. Right? But were there any more you wanted to add to that list?

101
00:47:20,184 --> 00:48:09,914
Kami Huyse: Yeah. So: critical thinking, ask better questions, bring personal experiences, and experiment. An experimental mindset. You're going to have to set aside some time for this, and I think it's hard for marcom professionals to do that. It's hard for me to do that. I think very critically about the tools I add to my tool belt, because every tool you add takes extra time to learn, right? And to make sure you're using it well. So I would say be experimental, but strategically experimental, in the sense that you experiment in order to solve a problem. Start by thinking of the problems you have, and research ways AI could possibly help you solve them.

102
00:48:11,594 --> 00:48:14,778
Daniel Nestle: I like that. I have a couple of things to add to the list, too.

103
00:48:14,826 --> 00:48:17,418
Kami Huyse: Sure. Let's go. Let's do it. Let's build it together.

104
00:48:17,586 --> 00:48:47,098
Daniel Nestle: And these are things that everybody can and should do. I'm sure you've heard this from your parents or your grandparents or whoever your guardians were. The first thing is read. Read a shit ton of anything. Hopefully read good things, that's subjective, but read. Read the news, or, I don't know about the news. Get newsletters, read blogs. Just read.

105
00:48:47,146 --> 00:48:47,330
Kami Huyse: Read.

106
00:48:47,362 --> 00:49:34,146
Daniel Nestle: Read whatever people are writing, because you can only become a decent writer if you read. But the other part of it is vocab, right? When you read, you learn words, those funky things that make meaning possible. Words are the only tech here. That's your code. That's the code you need to interact with the AI. You need words, the right words, and the change of a word changes everything. Telling the AI to think deeply and telling it to think carefully are two different things. What are the differences? This is why you read. So: read, build the vocab, get your SAT scores up, whatever you need to do. That's the first thing. These are very practical things.

107
00:49:34,170 --> 00:50:20,910
Daniel Nestle: The second practical thing is writing, or editing, either one. Being a good writer is absolutely the most highly recommended thing you could possibly be. But it's a fact of life that a lot of people don't fancy themselves good writers. They have issues with grammar or style, they get into their own heads, and they think, oh, I'm not good enough. This is not true. Grammar isn't as important to AI. You just have to be able to put your thoughts down properly. And when I say be a good writer, I mean arrange your thoughts in such a way that they make sense and they're clear. Be clear, be concrete, be crisp. If you can do that, you can quibble over the little details after, but you're in a good place.

108
00:50:21,062 --> 00:50:59,964
Daniel Nestle: And then editing. Editing is the next level, where you know what it should be, or you think you know what it should be, and you can fix it. And you should edit yourself constantly as you're interacting with the AI. So those two things are really critical. And I would just back up what you said about asking better questions. You can learn what those better questions are. You can learn how to ask better questions by, number one, reading. Read interviews. Read what people write about asking questions. There are certainly plenty of books on the topic of how to ask questions.

109
00:51:00,504 --> 00:51:39,678
Daniel Nestle: If you're not a curious person, or you don't think you're a curious person, there are always your fundamental questions: who, what, when, where, why, how. In fact, use just one of those words and see what happens, see what answer the AI gives you. Just type in why and see what happens, then how, and go from there. Build your question skills one root word at a time. Those are the main things I would add. There's more, I'm sure, but did you have anything else?

110
00:51:39,686 --> 00:52:23,102
Kami Huyse: I mean, there are so many more. No, I was just looking something up, because I was trying to figure out exactly when I bought my first AI tool off of AppSumo. So I am a crazy person: I go into AppSumo, which is a marketplace for tools from SaaS developers who are trying to launch onto the market and get a bigger audience. One of the first AI tools I started to use was back on September 7, 2021. I bought something called Word Hero, which is basically an AI writing tool. They were using the API, the behind-the-scenes AI tools, and they were terrible, by the way. They just weren't very good at that point in time. And that was in 2021.

111
00:52:23,238 --> 00:52:51,036
Kami Huyse: So if you think about it, we're in 2024 as we record this, and the tools today are light years ahead. But I started experimenting early, in 2021, when nobody was talking about this. Nobody. So one of the things you want to do is keep your eyes open for things coming along that you might be able to experiment with early on. That's number one. And we did say experiment, but experimenting is really important.

112
00:52:51,180 --> 00:52:51,864
Daniel Nestle: Yeah.

113
00:52:52,284 --> 00:52:56,060
Kami Huyse: Number two, which we haven't said yet, is join a community.

114
00:52:56,252 --> 00:52:56,860
Daniel Nestle: Oh, yeah.

115
00:52:56,892 --> 00:53:34,070
Kami Huyse: So communities are great, because a community of your peers is people who are like-minded, and you help each other. You can't read everything. I read so much, and it's still hard. But the bottom line is, I've learned so much from you, Dan, and from, as you say, our Rise community, and from other communities I'm involved in as well. I'm not just involved in Rise. I'm invested in about four communities in different ways. They're different kinds of communities. Some of them are mine; some of them I'm just a member of.

116
00:53:34,262 --> 00:53:41,204
Daniel Nestle: I'm involved with Paid Society. There are a lot of communities out there that are extremely valuable and important. I totally, 100% agree.

117
00:53:41,364 --> 00:53:43,316
Kami Huyse: Yes. You don't want to do this alone.

118
00:53:43,380 --> 00:54:11,164
Daniel Nestle: Yeah, because you're not. This is not something you can do alone, and you are not in this alone. LinkedIn, in itself, you can argue it's not a community because it's too many people, but it's a great source of information from people who are like-minded and on the same path as you. In some ways it meets the criteria; it's just that you're not as interactively connected with people. But not always.

119
00:54:11,544 --> 00:54:51,968
Kami Huyse: The hive mind is really important. I mean, you don't want to get into an echo chamber, a place where you never hear differing points of view. But a community helps if you're trying to keep on top of a lot of different information like this. So if you're thinking, oh, this is interesting to me, I'd love to get more into this, then join us in the Rise community. If you're a communicator, you'd fit in great there. I'm just saying there are ways to do this that don't take up all your mind space. Again, we're telling you to do a lot of things, and that list got really long. But the truth is, number one, stay curious. I love that.

120
00:54:52,096 --> 00:55:10,566
Kami Huyse: Number two, think about this as a toolkit for solving the problems you need to solve, and keep your mindset around that. You'll start to see the answers to your problems in the different tools you come across. Be experimental, and really lean into a community.

121
00:55:10,750 --> 00:55:50,826
Daniel Nestle: Another thing that popped into my mind, because of something you said about not taking up all the mind space: keep in mind that this is not a zero-sum game. When you have a zero-sum mentality, you will lose. Nothing's going to work for you. You're going to run into problems and end up feeling victimized by the advent of these new technologies taking all your time away from other things. Almost everything you do with AI can overlap with things that help you in other areas. So when you read, and I didn't say read all about AI, right? You're reading a business book.

122
00:55:50,850 --> 00:56:16,126
Daniel Nestle: You're reading a teaching manual, a history book, a novel, whatever it is that's going to help you with AI. And guess what? In the moment it has nothing to do with AI, so it's not taking up your mind space in that way. The kinds of things we're suggesting you do, apart from sitting down and experimenting with AI, are things you do anyway, or should be doing anyway.

123
00:56:16,310 --> 00:56:45,658
Kami Huyse: For example, right now I'm deep, deep into health. I've learned a lot about my brain, like how when you sleep, your brain is like a washing machine, and your spinal fluid washes up into your brain. And I've gotten into eating habits, intermittent fasting, what that's all about. I spend a lot of time on things that have nothing to do with this. But you know what that does? It opens my mind to different ways of thinking. It doesn't matter what you're passionate about.

124
00:56:45,826 --> 00:56:55,530
Daniel Nestle: 10,000% agree with that. Every time you learn something new, you open up all kinds of new neural pathways. The reticular activating system.

125
00:56:55,562 --> 00:56:58,134
Kami Huyse: Activating system. How do you think I know about that?

126
00:56:59,414 --> 00:57:38,104
Daniel Nestle: I had the wonderful Dr. Laura McHale on my show a couple of episodes back. She's a neuroscientist for communications, and if I remember even 15% of her book, it puts me in such an interesting place when I think about the way we process information and the way we communicate with one another. From a neuroscience standpoint, AI is one of those things: the way you interact with it stimulates your brain in different ways, and understanding how that works is, I think, another very helpful point. I'm not saying go and learn neuroscience. But I'm not saying don't go and learn neuroscience, by the way.

127
00:57:39,004 --> 00:58:21,774
Kami Huyse: Oh, and it's not a bad thing. Let me add to that. I had a boyfriend way back in the day, before I was married, and he worked on neural networks. That was the beginning of what became AI, right? Neural networks were learning networks, networks that learn, and that's really what AI is anyway: a neural network learning from itself. And that was more than 30 years ago. So think about how long this technology has been cooking. It's based on the idea of the way we learn, the way our brains learn. They were trying to duplicate our brain, and that's really what AI is. So I love that part of it: it's learning for us, too.

128
00:58:22,074 --> 00:58:32,124
Daniel Nestle: Yeah. And it has a learning mindset; you might as well have one, too. Speaking of communities and learning and all this kind of stuff, let's talk about ethics. How was that for a segue?

129
00:58:32,584 --> 00:58:34,160
Kami Huyse: That's such a segue. Good job.

130
00:58:34,192 --> 00:59:08,254
Daniel Nestle: I did want to, you know. I meant to get there. I mean, we have all these powerful digital entities that we're working with now, and again, I struggle to call it a technology, even though it's based in technologies. They're the wheel. They're the printing press. They're societal (is that a word?), political. It's civilizational change.

131
00:59:08,794 --> 00:59:09,574
Kami Huyse: Yeah.

132
00:59:10,434 --> 00:59:48,014
Daniel Nestle: It brings with it a lot of baggage. And I know that a lot of your work is focused on doing this ethically. I know we're wrapping up, and it's terrible to give it such short shrift, but what are your thoughts? We've heard a lot about ethics and AI, and we could talk about deepfakes; everybody understands those. What do you think are the big red flags right now, the things we should be paying attention to from an ethical standpoint, from a danger standpoint, especially in light of the latest announcements?

133
00:59:48,394 --> 01:00:35,694
Kami Huyse: Yeah, there's some new stuff that just came up. I want to start by saying AI isn't inherently evil, but the people who wield it may be. Here's the problem with AI in general: it's just like our society. We talk about racism and bias and sexism and terrible things. Here's what AI does: it takes all the knowledge of the world, boils it down, and repeats it back to you. So if you take all of Reddit, all of these different places, and boil it down and give it back to me, which they have, there are going to be some biases in there, because we are biased as humans. All it's doing is reflecting. It's like a big mirror.

134
01:00:35,734 --> 01:01:17,284
Kami Huyse: I always bring this out when I'm talking to people. It's like a big mirror. I know you can't see this, people, but I have an actual mirror that I pull out once in a while. AI is a mirror, and it's just reflecting back to you what it sees, right? So we, as a society, need to train it, and guess what that means: a lot of money and time are going to have to go into training it. On the micro level, we can train it ourselves. I have a couple of prompt words, or prompts, that I put in. They're really not big prompts: I put in CEO and secretary, and I put in doctor and nurse, and I've done it once every year.

135
01:01:17,404 --> 01:02:08,302
Kami Huyse: I talk about this in my presentation, and I show how things have shifted. Not much. A little, but not much. Generally speaking, you can imagine what the AI thinks a CEO looks like versus a secretary, or a doctor versus a nurse. And it is exactly as you'd think. The CEO is a white male, young, chiseled jaw, in a nice suit, standing there like he owns the world. And then there's the secretary: young, beautiful hair, and usually scantily clad, sort of, but business-scanty. Same thing for nurses and doctors, really. Although this year, at least, one of the guys was a little balding and looked a bit more geeky.

136
01:02:08,398 --> 01:02:10,606
Daniel Nestle: What are you using to do this? Which tool, by the way?

137
01:02:10,670 --> 01:02:48,994
Kami Huyse: I've been using Midjourney, but I put it into almost every other tool I have, too, just to see what all the different tools do. It would be good for you to have what I would call a bias prompt of your own, for whatever makes sense in the world you live in. Do a bias prompt and test it once or twice a year. And if you want the output to be unbiased, you're going to have to create prompts that help it not be biased. You're going to have to tell it what to do, and it is weird to have to do that. You say, give me a diverse group of such-and-such, and it says, okay, here's diversity, and it was all white men.

138
01:02:49,034 --> 01:03:24,040
Kami Huyse: Now it's all white men and one white woman. You almost have to sound racist and sexist yourself to get it to do what you want it to do, and that is going to be uncomfortable, I think, generally speaking. But it's really important. Dove did a really great campaign around this. Their Real Beauty campaign has actually created a prompting book to help you create more realistic characters. So you might want to grab that and put it in the show notes, or I can give you the link to it.
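
Kami's yearly bias check can be scripted so the comparison is apples to apples. Midjourney has no official public API, so this sketch assumes OpenAI's image endpoint instead; her probe words are the only part taken from the conversation, and the model name and file layout are illustrative.

```python
import base64
import datetime
import pathlib

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set
ROLES = ["CEO", "secretary", "doctor", "nurse"]  # Kami's probe words

# One folder per year, so this year's defaults can be compared with last year's.
outdir = pathlib.Path(f"bias-probe-{datetime.date.today().year}")
outdir.mkdir(exist_ok=True)

for role in ROLES:
    # A bare one-word prompt, so the model's defaults (its biases) fill in
    # everything else about the person it draws.
    result = client.images.generate(
        model="dall-e-3",  # placeholder; any image model you have access to
        prompt=f"a {role}",
        n=1,
        response_format="b64_json",
    )
    (outdir / f"{role}.png").write_bytes(base64.b64decode(result.data[0].b64_json))
```

Run it once or twice a year, as she suggests, and compare the folders side by side to see whether the defaults have shifted.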

139
01:03:24,072 --> 01:03:24,400
Daniel Nestle: That'd be great.

140
01:03:24,432 --> 01:04:12,214
Kami Huyse: The Dove Real Beauty campaign has obviously always been about trying to show beauty in different forms: weights, face types, ages. And they've put together an awesome prompting book to help you start with that. It's a recipe. They don't call it that, but to me it's a recipe for how to create more realistic characters. So that's the prompting side, for images. Then let's go all the way into the legal side. There's been a lot of case law around whether images have been stolen and so forth, and what's ended up happening, as it always does, is that these LLM companies are starting to do business deals with the Shutterstocks of the world. The deals are starting to happen.

141
01:04:12,634 --> 01:04:57,714
Kami Huyse: But yes, a lot of these models have been trained, because that's how you build an AI, right, you train it, on data scraped from the Internet, meaning nobody got your permission. I mean, it's free to all; you can come to my website and read all my stuff. So people are starting to put rules in the robots files on their sites that say, don't come and take my stuff. There are ways to do that, and that's going to happen. So I do feel like there's going to be a need for novel content. And a lot of things have happened in the last few weeks. I don't know if you want to get that non-evergreen, but Sam Altman, who runs OpenAI, has been problematic.
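
For reference, the robots.txt opt-out Kami describes looks like the snippet below. GPTBot (OpenAI), Google-Extended (Google's AI-training token), and CCBot (Common Crawl) are publicly documented crawler names as of this recording; check each vendor's current documentation, and remember these rules are requests, not enforcement.

```
# robots.txt: ask AI training crawlers not to take my stuff
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```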

142
01:04:57,794 --> 01:05:44,312
Kami Huyse: I mean, they tried to remove him as CEO because he wasn't communicating with his board, and his board said, hey, if you're not going to communicate with us, we're going to get rid of you. And unfortunately, or fortunately, I don't know, his team said, no, we won't stay unless he stays. So guess what? They threw out the board, and now he's put together a new safety team that includes himself and people who work at OpenAI. That's not going to work. You need outside people so you don't miss your blind spots. And I do think that absolute power corrupts absolutely. It's just human nature, and that's a thing we have to look at here, too. I'm not saying Sam Altman himself is corrupt. I don't even know.

143
01:05:44,328 --> 01:05:58,014
Daniel Nestle: But there's fundamentally a problem. We've seen this in every single societal structure that has ever existed: there's fundamentally a problem when the person in charge is also the person in charge of policing the organization.

144
01:05:59,074 --> 01:06:32,870
Kami Huyse: Right. And back in the day, we had ombudsmen for newspapers, and the ombudsman's job was to look at things from the community's point of view. That is what I'm saying we should be as communicators. I've said this for years; I have a blog post that goes back to 2008 or so arguing that we as PR communicators should be the ombudsmen for our organizations. We should bring our point of view to protect the people we represent as PR people, and the people we represent are our audience, our customers, our stakeholders.

145
01:06:32,942 --> 01:06:33,270
Daniel Nestle: Customers.

146
01:06:33,302 --> 01:07:12,194
Kami Huyse: Yeah, our stakeholders. So that's the bottom line. I feel like that has not changed, and it becomes even more important now. From the ethical point of view, that is our job as communicators: to say, okay, we're using this, and here are the things we're going to put in order. PRSA takes a really good look at this through its code of ethics, so that's another link I can give you for your show notes. The code of ethics gives you ways to think about each of its principles in light of AI and how you're using it. I just think those things are important.

147
01:07:12,354 --> 01:07:58,892
Daniel Nestle: I agree. You know, Anne Green, she's CEO of G&S Business Communications, G&S Strategic Business Communications, I'm going to get that wrong, I'm sorry, Anne, but Anne was on the show a couple of weeks back, and she's very involved in ethics for our profession and now ethics for AI. I hear a lot about it. And of course, being a practitioner and somebody who uses AI all the time, my approach is: always be transparent about what you're doing. Make sure people know you're using AI, even if it's an image. I generate images for our company newsletter, for example, that we post to LinkedIn, and I insist that there's a caption under the newsletter that says 'AI image generated by Midjourney.' You just have to put it out there.

148
01:07:59,028 --> 01:08:23,264
Daniel Nestle: And I also insist that the image cannot be in the style of any particular painter or anybody famous. Even though you can't guarantee there won't be an element of something in there, I'm pretty sure we're in a good place. But more than that, I'm absolutely sure I'm not intending or trying to steal anything, and I'm not putting anything out there that's controversial.

149
01:08:23,844 --> 01:08:24,876
Kami Huyse: Do no evil.

150
01:08:25,060 --> 01:09:16,856
Daniel Nestle: Yeah, do no evil. And the rules that govern AI ethics are basically the same rules that should govern ethics, period. You need good governance. Use common sense a lot, but understand that when you put something out there that misrepresents, you're doing something that is terribly wrong. It may not be illegal, Zuck, but it's certainly wrong. There's a lot to this, and I think we have a long way to go. I don't even understand why there was ever a question like, oh, what are the ethics of doing a video of a digital-avatar Kami that sounds and looks exactly like Kami, and speaks like Kami, so we can just use her to do a commercial? What are the ethics of that?

151
01:09:16,880 --> 01:10:09,830
Daniel Nestle: Why are you even asking that question? That's clearly unethical, unless Kami says, go ahead and do it, I love this, please do this for me. That's the only thing that makes it ethical. It astounds me sometimes. And I suppose there are people in the world who have little experience, or who have been somewhat corrupted by TikTok, and I'm not a fan, who have very different views of what constitutes acceptable behavior. That's a learning that has to happen. If you think about the transformation we talked about earlier, about new entrants into the profession, what's their transformation challenge going to be? Maybe this is it: the transformation from no ethics to ethics. And maybe it's okay if, and hopefully when, TikTok closes down.

152
01:10:09,862 --> 01:10:55,124
Daniel Nestle: And yes, I'll go on the record saying I don't like TikTok and I want it gone, and not for any political reasons. It's just that I think it's terrible and evil, having seen teenage girls go through this whole thing. The transformation from, okay, TikTok is no longer available to us, so how do we do content now? What do we do with all those videos, with all this stuff? That's a type of transformation in and of itself, because it's billions and billions of bytes of data right there. I'll get off my soapbox now, sorry. Ethics is certainly a critical, core issue, and our profession should be the safeguard of it. I agree with you: we should be the ombudsmen.

153
01:10:56,104 --> 01:11:34,578
Kami Huyse: Yeah, I agree. And as far as TikTok goes, I don't 100% agree with you. I do and I don't, because a lot of people have built their livelihoods there, and I've seen some positive things, too: people who have made their music careers there, and even my own teenagers. But there is a great opportunity for evil on many platforms, and I've seen it on many platforms. There are so many things we are not doing 100% well, because the technology moves so much faster than we can really understand, especially when it comes to legislating the platforms.

154
01:11:34,666 --> 01:12:16,286
Kami Huyse: Given how quickly they moved on the legislation for this, I don't know if they're going to do away with TikTok per se, but I think the process will change it in some way, which isn't a bad thing. Those kinds of constraints on technology are so important, and AI is absolutely that way. I've been talking about this the whole time; I talked about it with social media back in the day. Have guardrails. I always showed a picture of a road with guardrails on the side, because you have to have guardrails on the way you use this technology. Governance is so important, for your brand and for the country.

155
01:12:16,350 --> 01:12:33,484
Kami Huyse: For any country, really. And the European Union is doing so much better at this than we are, as far as putting guidelines in place. I actually want to thank them directly, because it's making things better here as well. Those guidelines have helped us.

156
01:12:34,184 --> 01:13:25,514
Daniel Nestle: Especially if you're in a global company; you have to stick to those guidelines. Everything comes with pros and cons, but on the whole, I think we as humans, as the imperfect human in the mix, the flaw in the diamond, also have the capability of finding and identifying those flaws. Like knows like, right? And that's why we need to be on guard for ethics. We can create as many GPTs as we want to analyze things for ethical concerns, but ultimately an algorithm is not ethical. An AI is not ethical. The people behind it are the ones injecting it with whatever demonstration of morality and ethics it has. That's why we really need to be on guard, and this is going to be an ongoing discussion.

157
01:13:25,934 --> 01:14:14,002
Daniel Nestle: And there's always going to be this boundary zone, this gray area where we can say, oh, we should allow that, or we shouldn't allow that. But we don't usually need to function in that zone. We're not functioning in that zone most days, 90%, 95% of the time. I'm sure there are people whose jobs are completely in the gray zone, fine. But for the most part, those of us who function in our day-to-day lives, serving our clients, working for our corporations, doing the things we do, are not anywhere near the gray zone of ethics. So the decisions we make should be very clear. Don't copy shit. Don't claim things that aren't your own. Don't misrepresent your company. Don't do things without people's permission. There's a long list of don'ts.

158
01:14:14,098 --> 01:14:18,962
Daniel Nestle: And do the right thing, you know? Be respectful.

159
01:14:18,978 --> 01:14:56,904
Kami Huyse: And I think don't be evil is the top line of it. Of course, what you define as evil is interesting, but don't be evil. I think that's really important for communicators. We have to be careful. And by the way, part of it is actually standing up to your executives sometimes, because I've had executives ask me for things, and I've said, no, you can't do that, and here's why you can't do that, and here's going to be the effect of doing it. Yes, it's a shortcut, it's great for us in the short run, but here's the potential cost.

160
01:14:56,944 --> 01:15:42,484
Kami Huyse: And I've always talked about this. Arthur Page is a really good example; we were doing some talking about that beforehand. Arthur Page had principles, the seven Page Principles, and the way I put it, the first principle is grow a backbone. You have to have a backbone to get those principles into place, and that's really important. As communicators, we have to grow a backbone. Which often means asking: are we making ourselves unpromotable? Are we making ourselves less likely to succeed in a corporate environment? It could be. So it takes some courage, some uncommon courage, to be ethical in this way.

161
01:15:43,464 --> 01:16:21,364
Daniel Nestle: That is a huge topic to discuss. And sometimes I do feel like an invertebrate; I lose my backbone. Yes, sir, I'll do whatever you say. But over time and with experience, I've realized there's a lot more harm, a lot more self-harm, in not expressing your concern than in expressing it. I'd rather be labeled on occasion, and this is never me, by the way, the cynic, or the difficult.

162
01:16:21,404 --> 01:16:21,668
Kami Huyse: Difficult one.

163
01:16:21,676 --> 01:17:08,044
Daniel Nestle: The difficult one. Better difficult than a pushover, than the yes man, the yes person. It's a big decision to express an opinion of any kind these days, and it's a really big decision to express a contravening opinion, a forceful thought that this is wrong, or this is not going to go the way you think it goes. Speaking of things being wrong and not going where we think: that is absolutely not a description of this show. It has been fantastic. But I think keeping people on for much longer could fall under the definition of evil, so I don't want to do any evil anymore.

164
01:17:08,164 --> 01:17:49,978
Daniel Nestle: I want to make sure people get on their way, because I could keep talking to Kami for hours, as you can see. Normally I ask, what keeps you up at night? We've talked about that. We've talked about the ethical concerns of AI, and we've gone through a lot of really fascinating information about the skills we need. I had this beautiful, brilliant plan to go through a list of tools, so listeners out there could keep taking notes: oh, I need to get this one, I need to get that one. But you know what? I'll ask Kami for a resource we can attach to the notes, because, believe you me, whenever I need to think about which tools I should be using...

165
01:17:50,066 --> 01:17:51,906
Daniel Nestle: I check out what Kami's doing.

166
01:17:51,930 --> 01:17:57,974
Kami Huyse: I've got my AI recipe book. I can send you my AI recipe link.

167
01:17:58,674 --> 01:18:46,736
Daniel Nestle: Yeah, that would be terrific. In the meantime, I suggest that everybody out there connect with Kami when you get a chance. Having lived in Japan for such a long time, when I first saw her name I assumed it was pronounced like the Japanese kami, K-A-M-I, but it's not, and Huyse is a Dutch last name. I'll spell it properly in the episode title so you'll know how to find her. On LinkedIn, she's Kami Watson Huyse, and you'll find her pretty easily. Or just go to kamichat.com, which is probably the easiest solution. All of her links are there on one page; that's how you can get to her LinkedIn in two clicks instead of one.

168
01:18:46,760 --> 01:19:13,162
Daniel Nestle: And go follow her at kamichat on most of the socials, @kamichat, and zoeticamedia.com for her company. Check it all out; it's all incredibly important stuff. And buy The Most Amazing Marketing Book Ever if you want to see some of her writing and mine almost side by side. So, Kami, did I miss any of that? Did I miss any of your links?

169
01:19:13,218 --> 01:19:20,066
Kami Huyse: No, you did great. And kamichat.com is a great place to go, because at the very top is also the AI recipe book, just as a PS.

170
01:19:20,130 --> 01:19:20,658
Daniel Nestle: Oh, beautiful.

171
01:19:20,706 --> 01:19:49,306
Kami Huyse: There we go. It's a list of links where you can find me in all the places. I do a live stream on Thursdays that's all around social media, communications, how-tos, and AI. I'm actually doing an AI show once a month called AI Smart Sparks, and I'm excited about that; it's something new that's coming along. So, yeah, come find me. I would really love to talk to you. I'm on all the socials; wherever you are, I am too.

172
01:19:49,450 --> 01:20:34,694
Daniel Nestle: Terrific. And definitely do that. Clearly, Kami is somebody you want to connect with. You heard it here, folks, on The Trending Communicator: a certainly trending person, Kami Huyse. Thank you so much, Kami. I really appreciate your time. I'm glad you were here. Thanks for taking the time to listen in on today's conversation. If you enjoyed it, please be sure to subscribe through the podcast player of your choice, share with your friends and colleagues, and leave me a review. Five stars would be preferred, but it's up to you. Do you have ideas for future guests, or do you want to be on the show? Let me know at dan@trendingcommunicator.com. Thanks again for listening to The Trending Communicator.