Transcript
1
00:00:00,360 --> 00:00:53,598
Daniel Nestle: Welcome, or welcome back to The Trending Communicator. I'm your host, Dan Nestle. You know, let's face it, we're in the thick of an unprecedented trust crisis. So unless you've been off the grid for the last ten years, you've probably noticed that trust in our institutions, especially government, education and media, is in freefall. And there's plenty of research, polling, countless studies, that attempt to explain why. It's geopolitical, it's ideological, it's because the people over there lie or the people over there distract. It's systemic gaslighting, it's denial of facts, it's false narratives, it's dark money, it's algorithms. Whatever. The point is that everywhere we look, we're finding more and more reasons to doubt that the information we see and hear is trustworthy. Misinformation and disinformation are everywhere, and it's only getting worse.
2
00:00:53,734 --> 00:01:39,602
Daniel Nestle: Technology, and especially AI, has enabled bad actors to disseminate bad information at scale through deepfakes, AI-enabled content farms, and oh, so many bots. This is the world we're contending with. And as communicators and marketers, we're often on the front lines of this information-driven trust crisis. So what can we do about it? Well, maybe my guest this week can help us out. A PR and marketing pro who cut his teeth on the agency side in the UK, he's built his career in Israel's vibrant startup community, leading PR and marketing for both B2B SaaS and consumer brands. Today, he leads marketing for Cyabra, the leading media monitoring platform that detects, uncovers and mitigates malicious actors, bots, bot networks and generative AI content.
3
00:01:39,786 --> 00:01:48,710
Daniel Nestle: Here to help us make sense of the good, the bad and the fake online, please welcome to the show Rafi Mendelsohn. Rafi, it is good to see you.
4
00:01:49,500 --> 00:02:09,412
Rafi Mendelsohn: Thanks for having me on. I've really been looking forward to it, and looking forward to getting into the topic, because everything you talk about and write about is kind of thought leadership, but also getting into the weeds of GenAI. And I know a lot of people come to you, so I've been looking forward to getting under the hood of this area in particular.
5
00:02:09,596 --> 00:03:00,986
Daniel Nestle: Oh yeah, I mean, it's fantastic. You know, here The Trending Communicator is really all about the things that communicators, and I suppose marketers, really need to be looking at as they move forward in the future to be successful. And AI is a huge part of that. And I keep joking that I'm a one-trick pony. I keep talking about AI, but it's turning out that's not always the case. Of course, in PR, and especially comms, we have so many different elements of our roles that are constantly changing. But we still have to understand the fundamentals. We still have to tell stories. We still have to be the leading custodian of corporate reputation and corporate brand. And we must protect our clients and our companies and our executives, et cetera, all the time, and many more things. We have to be good writers.
6
00:03:01,018 --> 00:03:46,214
Daniel Nestle: We have to be good editors. We have to do all these things, understand where people are talking, what they're talking about, get ahead of it, try to join those conversations. Right now, when we look at that environment, we look at that landscape, we don't know who we're talking to, and we don't know whether we're telling the right stories or whether we have the right facts on our side. And I like to think that we do. But those audiences that we're speaking to don't always believe that we have the right facts on our side, because the whole thing has been corrupted, right, over the last decade or so. And that's kind of where you guys at Cyabra come in, I think, to help us out. And I want to get into that.
7
00:03:46,222 --> 00:04:40,536
Daniel Nestle: But first, I want to learn a little bit more about you, Rafi. And just to tell our listeners, first of all, we met thanks to Todd Grossman, wonderful human who's been on the Dan Nestle Show before, and I hope to have him on The Trending Communicator at some point. And Todd is, as anybody knows, Todd is a connector extraordinaire and really excellent at just understanding, especially with his history in monitoring and in platforms. He really understands the technology and, I guess, the social media listening and media monitoring environment more than the vast majority of people on earth. So when he said, you need to talk to Rafi, I didn't even think twice about it. I've got to talk to Rafi. But why don't you tell us about you, Rafi? You've got a fascinating background. Just give us the quick overview of who Rafi Mendelsohn is.
8
00:04:40,608 --> 00:05:37,572
Rafi Mendelsohn: Okay, sure. So, London born and bred, started my career working at PR agencies, including working in the London office of Ketchum, where I worked in the corporate and tech teams. Absolutely loved every second of it. The amazing education of being thrown in the deep end and getting to work on these amazing opportunities and responsibilities. Brands ranging from classic communications programs to thought leadership and speaker bureaus and crisis planning and crisis communications, with large companies and also with governments. So I really loved all of that, and being surrounded by people who have so much experience, years of experience, like journalists from whatever industries they're coming from. And over the last, say, nine, ten years, I've moved in-house, but particularly in early-stage startups, global startups that are looking to grow and scale around the world.
9
00:05:37,716 --> 00:06:26,000
Rafi Mendelsohn: I think I've always tried to be challenged and challenge myself and look at what bits I am missing in terms of my experience, and adding strings to my bow. And I think sprinkle in a little bit of imposter syndrome, and that drives interest in doing lots of different things, where you think, well, what does that look like on that other side? And are they really as expert as they say they are? And so working more in a marketing role, and at Cyabra as VP Marketing, adding all of those, as well as continuing to work on all of the communications foundations with that. In terms of PR and communications, my love, I suppose, is for data storytelling, being able to take data and tell amazing stories in different ways that capture people's attention and interest.
10
00:06:26,380 --> 00:07:04,428
Rafi Mendelsohn: And I think it's just such an interesting and still really unexplored area. I mean, so many people do data storytelling, but the way that we can do it and present it, and the opportunities to get different types of data in really creative ways, I think is amazing. So I've always really loved that, and that's been the thread that's run through. But also in terms of the marketing and going in-house. I know there's always a debate of communicators in-house versus agency, but I've loved going in-house. I love both sides, actually. But in terms of going in-house, you really get to understand the nuts and bolts, not just of communications, not just of the marketing, but also of the actual fundamentals of the business.
11
00:07:04,564 --> 00:07:44,258
Rafi Mendelsohn: And I think that's where we, as communicators, we're always talking about having a seat at the table and having that recognition and the buy in from senior people. And I think the more we can align what we do with the goals of the company, I mean, ultimately it comes down to revenue, often does, nearly always does. But how you align that with the revenue and make that connection so that everyone can see it. And I think that's been particularly interesting for me, working in house in a marketing role, being able to really understand, get it kind of as sometimes I always say we're flying the plane and building the plane at the same time. You get to work on the nuts and the bolts, but also on the strategy.
12
00:07:44,434 --> 00:07:53,670
Daniel Nestle: Well, as the VP Marketing now for Cyabra, do you also oversee essentially the comms role as well as part of your team?
13
00:07:54,490 --> 00:08:42,488
Rafi Mendelsohn: Yeah. So we're really lucky, we use our platform that's just so full of data. We don't need to be subjective and opinion-based. We can just simply lay out the data. As we sometimes say internally, we're kind of the Batman of that social media world, right, where we can just say, here's the information. And, yeah, we're shining a light on dark corners of the Internet, of social media. And so, yeah, that's absolutely a massive role. And misinformation is increasingly coming up in pretty much every type of conversation, from personal to professional to commercial, business-wise, but also to political. Mis- and disinformation seems to be seeping into every aspect of our lives. So it's just also an interesting topic. And what's really interesting from a journalist point of view is just the number of journalists that have the title of disinformation reporter.
14
00:08:42,583 --> 00:08:44,740
Rafi Mendelsohn: And that certainly changed over the last few years.
15
00:08:45,120 --> 00:09:50,740
Daniel Nestle: And that goes back to what I said earlier about trust. I mean, the fact that journalists are investigating journalists, essentially, or journalists are constantly looking at... they're being fact-checked, they're checking facts, they're looking at information, they're taking information at face value when they shouldn't be. There's a whole list, or a whole, I don't know, a cornucopia, if you will, of problems right now in media and in the journalism field, especially when it comes to misinformation, disinformation. So why don't we just start there? First of all, Cyabra is uncovering the good, the bad, and the fake information online. But how did you get interested in that particular field? Because you've worked across different industries, and Cyabra, I guess it still qualifies as a startup in some ways, but it's a global company.
16
00:09:51,370 --> 00:10:23,560
Daniel Nestle: You know, your platform is now being used across companies large and small. So I think you're a little beyond the typical image of what a startup might be. But how are you dealing with this misinformation disinformation things? So I know there's a long way of getting to the question, but why don't we start with, let's define what misinformation is, what disinformation is, and why journalists or institutions are having problems with this.
17
00:10:23,860 --> 00:11:13,288
Rafi Mendelsohn: Yeah, absolutely. Let's dive straight in. But also, what's interesting is that in just a few minutes of our conversation we've touched upon related, but also sometimes separate, conversations about trust and truth, and the tone and content of the conversations that we have online, and the various different stakeholders, from individuals to journalists to politicians. And I think sometimes it's interesting to combine all of it and then sprinkle in some GenAI. Of course it's interesting to combine, but often those conversations seem to intersperse, and it's also possible to dissect them in terms of understanding the frameworks of what we're looking at. So at Cyabra, we are always looking at it from the perspective of how malicious actors are operating. The company was started, as you said, it's not a one- or two-year-old startup, it's seven, eight years old.
18
00:11:13,464 --> 00:12:06,608
Rafi Mendelsohn: But the platform technology was created with governments in mind. So the idea to work with intelligence officers, and we're considered an Austin tool and open source intelligence, which just means that we collect publicly available information and no private information, and we work with intelligence officers to be able to identify threats coming from social media. Now, when the company was started, or a few years ago, you said the word disinformation. Everyone said, yeah, we get it. Elections, right? That's when it's used for, that's when it's relevant. And it was, and it is, but it's not just. And so disinformation is the purposefully created spreading of false information, false facts, false content, by malicious actors who have the intent of sharing information or misleading people through that information. Misinformation is when people unintentionally share.
19
00:12:06,664 --> 00:12:42,870
Rafi Mendelsohn: So we might be seeing something on social media and we think it's real and we engage in it, we share it, we like it, we post it, and actually it is a real, either because we, you know, have the best of intentions, or maybe we don't have the best of intentions, but either way, it's kind of that unintentional spreading. And so there is a, we see both happening, but in particular, what we are looking at when we are talking about AI, when we are talking about bots, when we're talking about content, is where we are. Cyber is predominantly focused on the malicious actors who are the main proponents of disinformation.
20
00:12:43,660 --> 00:13:40,646
Daniel Nestle: I think a lot of the problems that we're seeing with trust can be traced in some ways... I mean, it's hard to say where it can be traced to, because it's been a decline for a long time. But the conflation of misinformation and disinformation: when one side or the other doesn't like something, doesn't like a narrative, they're quick to say it's just misinformation. And here in the US, of course, you don't hear the word disinformation thrown around too often, because that's basically straight-out accusing someone of, as you said, being malicious, of lying, of distributing with the intent to deceive. So we hear misinformation, misinformation. And I think the word itself has lost power through misuse and has become a kind of disinformation of itself.
21
00:13:40,798 --> 00:14:44,090
Daniel Nestle: We get kind of meta, the rank use of misinformation is a tool that is employed often across different institutions, to shut down argument or to just kind of shift narratives into a direction that people think is the right way to go. You know, in marketing and comms, we're dealing with stakeholder communications, audience communications, all the time. And it's very, there's always a good chance that some, one of our audiences or some of our people, or whether it's our employees or our CEO or somebody will have a different view of what information is real and what information is not real. What's, you know, I'm not talking about necessarily business, you know, business intelligence or analytics like that. I mean, of course, the big societal and I guess, larger scale macro stories that were seeing that are being manipulated out there primarily on social media.
22
00:14:46,190 --> 00:15:05,330
Daniel Nestle: So what is it that youre doing or that Syabra is doing to, I guess, put some fact behind those terms? Misinformation, disinformation, like being dispassionate about helping people understand what is real and what isn't.
23
00:15:06,430 --> 00:15:49,220
Rafi Mendelsohn: In some ways, this challenge is in no way a new one. For hundreds, for thousands of years, there's been a challenge in the flow of communication and the way that people interact with each other and absorb, then use, information, right? The way that some people are receiving some information from certain authorities over others, and the way that information is used to influence people and help them form their opinions. In many ways, though, the challenge that we find ourselves with in 2024 is brand new, because of the interconnectedness of the way that we live our lives online and on social media. It's really today's town square.
24
00:15:49,860 --> 00:16:30,754
Rafi Mendelsohn: But whereas you had the town square and you could see with your own eyes, you might not have many options and variety of ways to get your news, whether it's the local newspaper or the town crier, but at least you could see them. And nowadays, when we are engaging increasingly on social media and online generally. The challenge as a society that we often have is a challenge of we don't even know what we don't even know. And when we're engaging with people, and actually, those people aren't even real. So imagine we're in a room full of people. There's 100 people in the room, and there are some people in one corner who have one opinion, some people in another corner who have another opinion.
25
00:16:30,802 --> 00:17:11,616
Rafi Mendelsohn: There's another group in another area that are really polite and nice and care about the facts, and there's another group, a circle, that are just shouting mean things. And maybe it's not true, and there's a really important debate that we're having at the moment about the decorum and the information being shared in that room of 100 people. But what sometimes is missed, I find, is that if 25 of the 100 people in that room were, in fact, robots, and they were either shouting good things or bad things, most of the time, it's bad things or it's truthful things, or it's incorrect things, most of the time, it's incorrect things. We are still absorbing the information from robots. And if we knew that 25.
26
00:17:11,688 --> 00:17:55,160
Rafi Mendelsohn: I mean, just taking as an example, but sometimes the proportion of conversations that we see on social media is as high as that. If we knew that they were robots, we would approach that conversation differently. But it's really hard. And to then bring in Genai, the use of those tools. JNA is fantastic. And most of your conversations have been how, Dan, how you can use it for good and positive, but again, to be the downer in the party, we're often looking at it, how these kinds of tools can be used by malicious actors. It's becoming harder for us to understand, okay, who are the robots in the room, and who should we listen to, and who should we not listen to? And it's becoming much harder to be able to distinguish that.
27
00:17:55,200 --> 00:18:04,320
Rafi Mendelsohn: And so that's some of the uniqueness that we find ourselves in 2024, compared to the same challenges in some ways, that we've been having for thousands of years.
28
00:18:04,440 --> 00:19:12,840
Daniel Nestle: Yeah. What have you seen to be, broadly speaking, and then you can give examples, the effect of bots, essentially, across different platforms? First of all, you mentioned, you said 25 out of 100, 25%. I mean, that's a large number of bots. And we know anybody who's followed the Elon saga at Twixter has, you know, has heard about his pursuit of bots, and probably the failure to get rid of them. And, you know, anybody who's moved to a different platform will see a proliferation of bots as well. And especially if you start to engage in any topic that's remotely polarizing or controversial or even highly topical, you'll see bots come out. So you said 25%. What has been, broadly, your experience with what's out there, and how do you identify them?
29
00:19:13,380 --> 00:19:44,250
Rafi Mendelsohn: Sure. So I would say, from our research, the average conversation across social media has around four to six, five to seven percent. Let's say 4 to 7% of an average conversation contains bots, contains fake accounts. And when we say fake accounts, inauthentic profiles, we mean bots, sock puppets, trolls, fake accounts. And we need to pay attention to those fake accounts because they've been created with malicious intent. And so 4 to 7% is kind of the average, give or take.
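The baseline Rafi describes lends itself to a simple screening rule: measure a conversation's share of inauthentic profiles and flag it when that share sits well above the 4 to 7% average. A minimal sketch, assuming per-profile authenticity labels already exist from some upstream detection step (the field names and the 2x threshold are invented for illustration, not Cyabra's actual logic):

```python
# Flag conversations whose fake-account share is well above the
# ~4-7% baseline. Profiles are assumed to carry an "is_fake" label
# produced by some upstream bot-detection step.

BASELINE_HIGH = 0.07  # upper end of the average 4-7% range

def fake_share(profiles):
    """Fraction of profiles in a conversation flagged as inauthentic."""
    if not profiles:
        return 0.0
    return sum(1 for p in profiles if p["is_fake"]) / len(profiles)

def needs_attention(profiles, multiplier=2.0):
    """True when the fake-account share is a multiple of the baseline."""
    return fake_share(profiles) > BASELINE_HIGH * multiplier

# The "25 robots in a room of 100" example from the conversation:
room = [{"is_fake": True}] * 25 + [{"is_fake": False}] * 75
print(f"{fake_share(room):.0%}")  # 25%
print(needs_attention(room))      # True
```

An average conversation at 5% would not trip this rule; the 25% "room" from the analogy above would.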
30
00:19:44,550 --> 00:20:34,072
Rafi Mendelsohn: And so when we start to see certain conversations, whether it's political, whether it's around wartime and around brands, and those numbers are significantly higher, that's when we need to take a pause and understand exactly what's going on here, because there is probably something brewing or actually something very bad has already happened or is attempting to happen in terms of manipulating, influencing the opinions of others. And so that kind of gives us cause of concern. But, of course, fake accounts is only one side of the equation. When Elon Musk was looking to acquire Twitter, the key debate was the sheer, just the raw number. What's that number? Is it 5% as one, we're saying, is it 20% as another saying? Full disclosure for all the listeners out there at the time that Elon Musk was looking to acquire Twitter, he actually hired syhabra.
31
00:20:34,176 --> 00:21:18,930
Rafi Mendelsohn: So were brought on to run analysis. We did a few reports, including the data that he had received from Twitter itself as part of that acquisition process. And he provided us with that data, and we ran it through our. Through our technology, through our algorithms, and came up with around 11.7% at the time. And since the acquisition has gone through, we have been spectators like everybody else. So just to get that out of the way and kind of disclosure of what our involvement and our experience has been in that. But, of course, even at that time, it was about, is it 5%, is it 20%? But that's kind of ignoring the impact of the accounts, and they have become more sophisticated just looking at the number of bots, the number of fake accounts.
32
00:21:19,310 --> 00:22:06,910
Rafi Mendelsohn: Some accounts just amplify and some post, but some are created in order to post original content, maybe a few hundred times a day. So you might have five, you might have 25. But the proportion of the conversation, the amount of content that they are dominating in terms of that wider conversation, might be much and often is much higher. So even when we understand how many fake accounts are involved in the conversation, it's really important not just understanding, okay, what the narratives are. What's the sentiment? What's the snowball? Is it going up or down? Who's talking about it? What's the real from the fake? But also, what are the narratives that the fake accounts are putting out compared to the real? And what, crucially, what impact are the fake accounts having on the real conversation?
33
00:22:07,030 --> 00:22:38,820
Rafi Mendelsohn: Because, of course, not all campaigns are successful and some are more successful than others. Some fake accounts are more successful than others. So you really need to understand here what the impact is, which is, of course, a much bigger question or a much bigger analysis investigation than just, okay, I just want that sheer number. That number is often an interesting and helpful entry point in terms of the conversation, particularly for brands, to help them understand what they're seeing, not knowing what they don't know. Challenge. But it needs to be seen in totality.
34
00:22:39,600 --> 00:23:29,236
Daniel Nestle: So we talked about Twitter, but I definitely have seen bot activity and malicious activity on Instagram and on TikTok. I mean, I'm no longer on TikTok. I proudly say I'm no longer on TikTok. But full disclosure, that was at the kind of suggestion of my psychiatrist rather than my own personal kind of politics or anything like this, although I'm very glad that I'm not on the platform anymore. But we're seeing, I think, on all the platforms where there is significant interaction, especially political interaction, a proliferation of bots. I haven't seen so much on LinkedIn, though. Not to say that there are no false accounts, because there certainly are, and I get hit with them, but they're so easy to spot, or at least I think they're easy to spot, being on the platform for almost
35
00:23:29,308 --> 00:24:07,280
Daniel Nestle: For 20 years, or more than 20 years now, I can generally tell. But that's just. No, there's no. I'm not using an algorithm to tell. I'm not using any kind of technical skill. I'm just eyeballing it and saying, yes, this is fake. Most people will not be able to do this. So, where are they? Like, where do you see them? Where do you see the bots? And we're talking about bots, but we can, and we can, you know, slide into other malicious things. But while we're on this topic, you know, where do you see them and where do you think the problems continue? And if you know, if you don't mind saying, where are they coming from?
36
00:24:08,740 --> 00:24:38,246
Rafi Mendelsohn: That's the best. I knew that would be a tough question. It really is. It really is. So I'm just going to carefully slide past that question and answer the first part of your question. Sure, you can come back to that later. But in terms of your question of where are they: again, to pull back and look at the perspective of malicious actors and how they think. Sometimes the debate is around, okay, this is all on X, or it kind of goes through cycles of which platforms are the ones that people are focusing on at that time.
37
00:24:38,318 --> 00:25:31,228
Rafi Mendelsohn: A lot of people looking at TikTok now, but if you're a malicious actor, whether you're a state actor or whether you are an individual loan actor, or if you're organized, if it's for financial benefit, whatever the motivation is, then you are not just going to be looking at one platform. The way you approach it, as they said in the defense, whether ttps, the tactics, techniques and procedures might change, you might approach it in a slightly different way. But why would you take one platform that has maybe a few hundred million active users compared to the rest that have even more? So, definitely it's across all of platforms. It's a really big challenge, right? It involves, the answer to this involves platforms. Us as a society, regulators, it's all stakeholders. It's a really tough challenge because they're really smart.
38
00:25:31,284 --> 00:26:16,768
Rafi Mendelsohn: What they do in terms of those who are trying to influence and disrupt social discourse. So it's across all the platforms. But then the other thing to consider, and I think this is something that we are seeing alarmingly more over the last six to twelve months, is the way that fake accounts, for a variety of reasons, are also inserting themselves or people are coming across them, not just when it comes to going to the polls and when it comes to election time, but actually in most aspects of our lives. And we think we are shielded from it, depending on the conversation, depending on our interests. If you might be interested more in sports or you might be interested in whatever it is, but actually seeing the way that fake accounts are inserting themselves in every type of conversation is particularly worrying.
39
00:26:16,824 --> 00:26:23,736
Rafi Mendelsohn: And so this is no longer just the domain of one platform or the domain of an election cycle.
40
00:26:23,888 --> 00:26:51,480
Daniel Nestle: When you say the way that they're inserting themselves, is that referring to the sophistication with which they are analyzing or looking at, you know, topics and conversations? Is it the way that they portray themselves, or the way that they can sort of deceptively sneak into conversations? What's changed about the way that they are inserting themselves?
41
00:26:51,980 --> 00:27:43,616
Rafi Mendelsohn: Wow. So now we get to really geekily get into the specific details, and I've been looking forward to this. So, yeah, the way they're inserting themselves into these types of conversations... for example, let's just take brands. We're finding that brands are being impacted by brand disinformation. I would say there are a few ways, but two predominant ones. The first one is when a company has done something or is perceived to have done something. If they've taken a certain decision as a company, or are perceived to have taken decisions, whether that's around societal issues, DEI, trans rights, LGBTQ, or whether it's around wars and taking sides. And even if the company hasn't actually done that, if there is a perception that they've done that. So it's something that the company has actively done.
42
00:27:43,648 --> 00:28:31,728
Rafi Mendelsohn: And then we see real people involved and in quite an emotional, aggressive, angry way, and then we see fake accounts piling in and driving certain conversations, certain hashtags, in a way that really escalates that. And sometimes that conversation can be proliferated by fake accounts to the point that journalists and see it, are reporting on it. They themselves don't know the difference between the good and the bad and the fake, or that actually conversations online materialize physically, whether it's protests outside offices or in shops, whatever it is. And so that's around the kind of geopolitical or a certain political society issue that the company has taken. The second type is actually when the company has. It's not related to the company at all, and they find themselves just on the receiving end.
43
00:28:31,824 --> 00:29:14,964
Rafi Mendelsohn: So kind of an example we found at Sciabra, our analysts found Ripco, who a few weeks ago, the surfing brand retailer, and we found that they had a campaign where they introduced a new actor or a model for their. For their campaign. Not everyone was happy. And so in a very light way, on a very low level, the brand was trending in a negative way. Now, if you're a large enough brand, you might find yourself trending in various countries at various times. You might be for half an hour, it might be for an hour. Nothing really to cause concern. You want to track it if you're in communication, social media, the marketing team, and you want to be on top of it.
44
00:29:15,092 --> 00:30:07,722
Rafi Mendelsohn: But with Ripco, what we saw when people, real people for two or three days were using the hashtag hash boycottripco, and it was trending very lightly on low level in some countries. And then all of a sudden, the following five to seven days, we saw that hashtag just absolutely go crazy. And it was escalating. A 400 something percent increase of people using it found itself trending massively around the world. And when we analyzed that period of time where all of a sudden the hashtag skyrocketed, we actually saw that it was responsible. The accounts responsible for that were fake accounts, but they were actually fake accounts that had been created in advance of the indian general election. So in those stages, bot networks were being created for the. In preparation for the indian general election.
45
00:30:07,866 --> 00:30:55,548
Rafi Mendelsohn: But one of the techniques for bot networks to kind of appear real, authentic, and just like you or I, is to engage in everyday conversations, whether that's actually engaging, giving an opinion of, I saw this movie, it was great, and everyone should go watch it, or I love my soccer team, or whether it's just pumping out their content and using those hashtags. And it just happened to be that Rip girl found themselves in the wrong place at the wrong time. That hash boycottripkov suddenly found itself exploding. Lots of journalists started covering it. It started to get attention from lots of people, lots of real people on social media. And for the team there, Ripcord, just for disclaimer, is not a client of ours. But for the team there, it must have been incredibly confusing. They're seeing this negative hashtag, negative content going viral.
46
00:30:55,684 --> 00:31:34,088
Rafi Mendelsohn: And to look at the accounts, why? What's happening? There must have been a lot of questions. And unless you're able to kind of fully understand separating from the real from the fake and also understanding why they're coming from, where they're coming from, what are the similarities? Is it a coordinated campaign? And so that's another new area where actually companies who think that they kind of have their crisis simulations all planned out, and then all of a sudden, out of nowhere, it could be unrelated to anything that the company's done. And that's really scary. So that's just in the brand domain. And those are just two of the areas where brands can find themselves in hot water, all being targeted in ways that were pretty new.
47
00:31:34,224 --> 00:32:26,530
Daniel Nestle: You've just brought up something that's mind-blowing, I think, for a lot of us out there. When you represent a brand or you work for a company, you're in-house, or even on the agency side where you have a client, you want to figure out how to protect and defend the reputation. Reputation is always at the forefront in the communications world. So you go through and you figure out what topics you should be talking about. You figure out what you should be avoiding. You figure out what aligns with your values and your mission and your purpose. The typical toolkit. I would say that anyone who's experienced and been around comms and marketing for a while just understands that you can't just tell a story. It has to be aligned and connect in so many different ways so that it's relevant and resonant.
48
00:32:27,110 --> 00:33:20,908
Daniel Nestle: Now, there are countless events and countless things around the world that have nothing to do with any of this. So why would my company, for example, I'm working at LIXIL, it's a kitchen and bath and water technology company. My company has absolutely nothing to do with almost any political issue whatsoever, unless it comes to advocating for clean, safe sanitation, that kind of thing, right? Very safe. I don't think there's anybody out there who would say that's a controversial position to take. So, you know, in some ways, as a communicator, it's a breath of fresh air, because I feel like, okay, I don't have to worry too much. But now what you're saying is that we can get dragged in through a completely unrelated issue. So we know that the elections in the US are coming up.
49
00:33:21,044 --> 00:34:15,580
Daniel Nestle: We know that the Mexican elections just happened as of the time of this recording. We know that the Indian elections have gone on, the South African elections, all these things that have nothing to do with us. But what you're saying is that bot networks are being created to influence or to play a role in those particular events. But in order for those bots to have legitimacy, they have to build their own profiles to look like they're actually real. So to break it down to a very practical and anecdotal kind of example that I see almost all the time: it's when I'm looking at comments, and gosh, God knows I shouldn't be looking at comments, it's very unhealthy.
50
00:34:15,620 --> 00:35:10,650
Daniel Nestle: But when you look at comments for anything, you'll see so-and-so XY32723, some kind of crazy number, which is usually a giveaway, but I'm sure that they're veering away from that now. But you click through and you see, okay, they've got one follower, no posts or one post. And the post is just ridiculous. You know it's a bot. But now, if I had checked that out, let's say, six or ten weeks later, and this bot happened to survive, it would then have a history of random or different kinds of posts that's sort of all around a certain area. At a first glance, it would be much harder to tell that this is fake. And there's a brilliance, a malevolent brilliance, about this. So when that happens at scale, clearly that's even more problematic than it was before.
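The giveaways Dan lists, a handle ending in a long digit string, one follower, little or no post history, amount to a crude heuristic score. A toy sketch with made-up weights (real detection, Sybra's included, relies on far richer behavioral signals, which is exactly why aged bots with a built-up history slip past checks like this):

```python
import re

def bot_suspicion(handle, followers, post_count, account_age_days):
    """Sum a few crude red-flag weights into a 0..1 suspicion score."""
    score = 0.0
    if re.search(r"\d{4,}$", handle):   # e.g. "soandsoXY32723"
        score += 0.35
    if followers <= 1:                  # almost no audience
        score += 0.25
    if post_count <= 1:                 # almost no history
        score += 0.25
    if account_age_days < 30:           # freshly created
        score += 0.15
    return round(score, 2)

print(bot_suspicion("soandsoXY32723", followers=1, post_count=1, account_age_days=10))   # 1.0
print(bot_suspicion("jane_doe", followers=480, post_count=900, account_age_days=2000))   # 0.0
```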
51
00:35:12,150 --> 00:35:17,678
Daniel Nestle: So what are we doing about it, then? I mean, as communicators, what should we be doing?
52
00:35:17,774 --> 00:36:01,400
Rafi Mendelsohn: And even more so if something does happen, right, if it's impacting you, and even more so if you're a publicly traded company. That might be the motivation for malicious actors to target you. It might be nothing you've done, but purely by virtue of the fact that you are a publicly traded company, they might try and impact your stock price. But I suppose, linking it back to the earlier conversation in terms of communicators and the role that we play: if you are a company who's experiencing this, or there is a certain amount of conversation, let's say it's not even at full crisis mode, it's kind of an issue, and there are conversations, or people starting to notice, and it reaches the CMO or the CEO of the company or the board.
53
00:36:01,900 --> 00:36:41,622
Rafi Mendelsohn: And actually, in Ripco's case, we saw, and we can't say it's 100% connected to that, but it probably contributed to it, the share price drop. That's going to get, very quickly, the attention of the board and of the CEO. And they're going to turn around, probably, to two groups. They're going to turn around to the security and cyber team and say, what's going on? And from experience, when we've seen these things happen, the security teams and cyber teams, threat intelligence, are turning around to the CEO and saying, we don't monitor social media. That's not our domain. It's the marketing team that have the passwords, right? They're the ones creating the content. They're the ones spending money on these platforms. We don't touch any of that.
54
00:36:41,686 --> 00:37:25,038
Rafi Mendelsohn: And then you go to the social media teams. You might have some crisis communications people, and they might be ready with their handbooks and their playbooks and the simulations that they've done. And some of those might account for the scenarios that they're finding themselves in. But generally speaking, as communicators, we're kind of thinking, well, we improved our sentiment and we had a 12% increase in mentions from last year or last month. Look at what a great job we've done. And hopefully you will have done a great job, but the CEO is going to turn around and say, I don't really care about that right now, because our share price is tanking, because some fake accounts are making #BoycottOurBrand go viral. So what exactly are you doing about it?
55
00:37:25,094 --> 00:38:06,548
Rafi Mendelsohn: So there's this kind of disconnect, where we have this gap in the middle: this defensive, protection mindset of security, who aren't looking at social, and the rest of us looking at social and at communications, at where these things can snowball from, but not necessarily thinking from a defensive point of view, or having the tools and alerting systems to be able to do that. And even if you do observe it, to be able to know, okay, what's real and what's fake. You know, if you were Ripco, once they start to get questions from journalists, they can actually turn around, rather than saying, oh yeah, we're really sorry for the initial thing that we did, the kind of old classic playbook, we're sorry, we put out a statement, if they wanted to.
56
00:38:06,684 --> 00:38:52,470
Rafi Mendelsohn: One of the tools that the communications teams could now use is to say, actually, we're experiencing a bot attack. We're under attack, just like a cybersecurity attack. And they could present the data and say, well, actually, 30-something percent of all of the conversations around us are driven by fake accounts, and this is where they're coming from. That's a great opportunity to completely change the narrative: firstly to gain transparency into exactly what's happening, but also to change that narrative. And that will obviously be reflected in the way that your share price is thought about as well. So there's a lot at play here, and there's a lot that needs to be done. And I think as an industry, we are now thinking about that, and about who owns that piece in the middle, that kind of defensive communications, that brand disinformation.
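The "30-something percent driven by fake accounts" figure Rafi mentions is, once each account has been labeled, just the share of posts attributable to fake authors. A minimal sketch with invented handles and labels:

```python
# Share of a conversation attributable to accounts already labeled fake.

def fake_share(posts, fake_accounts):
    """posts: list of (author, text) pairs; fake_accounts: set of handles.
    Returns the percentage of posts authored by fake accounts."""
    if not posts:
        return 0.0
    fake = sum(1 for author, _ in posts if author in fake_accounts)
    return 100.0 * fake / len(posts)

posts = [("acct1", "..."), ("acct2", "..."), ("acct3", "..."),
         ("acct1", "..."), ("acct4", "..."), ("acct5", "..."),
         ("acct2", "..."), ("acct6", "..."), ("acct1", "..."), ("acct7", "...")]
fakes = {"acct1", "acct2"}
print(f"{fake_share(posts, fakes):.0f}% of the conversation is fake")  # prints "50% ..."
```

The hard part, of course, is the labeling itself; the arithmetic that turns labels into the headline number a comms team can show a journalist is trivial.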
57
00:38:52,630 --> 00:39:06,840
Rafi Mendelsohn: Of course, we've barely spoken about GenAI, but GenAI is interspersed with all of that, because of the tools and the ability it gives malicious actors to make their campaigns even more effective.
58
00:39:07,860 --> 00:39:32,896
Daniel Nestle: Maybe we can weave that into this next part, because we're talking about noticing or understanding that we're under attack, that these are bots or bot networks and whatnot, and there's some bad actor behind this. Nice way of pivoting away from where they're coming from; I appreciate that as a comms professional, kudos to you. But we know they're coming from all over the place, all over the world. Right?
59
00:39:33,008 --> 00:40:20,104
Rafi Mendelsohn: I'm happy to answer that, by the way, because I think the useful analogy to think about is when people ask who commits crime. There are lots of different types of people that commit a crime. State actors commit crime, organized crime, individual lone opportunists. So where does it come from? It comes from a multitude of places, each with a slightly different motivation. The motivation itself probably hasn't changed for thousands of years. It's power, influence, and money. It's always been that. And so you might have malicious actors who are looking to make a quick buck off of a share price change, who are looking to influence opinion around elections, or who are just looking to cause chaos. And in fact, that could all be happening around the same conversation, around the same topic.
60
00:40:20,192 --> 00:40:34,760
Rafi Mendelsohn: And here we are on our newsfeed, trying to engage in it. There's real and there's fake, but there are also different motivations and different reasons behind all the fake accounts we're seeing, without us necessarily being able to distinguish them. So hopefully that answers your question somewhat without dodging it.
61
00:40:34,800 --> 00:41:23,342
Daniel Nestle: Oh, for sure. I mean, look, I think anybody who's out there kind of knows some of the specifics about who and what and where. But I think it's broader than what people think. It's not just this particular country or these particular people. It's like what you just said: there's organized crime, there's financial crime, there are different kinds of criminals, and therefore there are different kinds of actors here. And the fact that the cost is fairly low means there's a low barrier to entry to make this happen. Right. But what I wanted to get to: so we've been talking about, okay, there are malicious actors and there are bot farms, and AI is just augmenting this. AI is just allowing content to be created at even greater scale.
62
00:41:23,486 --> 00:42:12,100
Daniel Nestle: And for bots to sound even more, ironically, human, because they're using the AI to just generate a lot of content, especially if it's in languages that aren't native to the operators or whoever else is out there. I don't know the technology or the operational side of a bot farm, and you probably do, but put that aside. Once you understand that there's bad information out there, that it's straight-up disinformation, what do you do? What is the next step? Because, typically, on my last episode, which I recorded and posted recently, we talked about crisis comms. And generally speaking, it's about, okay, when the crisis happens, you're reactive.
63
00:42:12,560 --> 00:42:43,214
Daniel Nestle: And there's a playbook, which, you know, if it wasn't written in the last couple of weeks, you should probably just throw it away, throw it out the window, because everything changes so quickly and so rapidly. And we talked primarily about what to do when you're in the crisis. So how do you deal, then, with this? What should companies do? What steps should they take, or communicators, or even individuals who are being attacked? What is the next step?
64
00:42:43,262 --> 00:43:32,528
Daniel Nestle: And let me add one little wrinkle to that, because we mentioned that Sybra is Israeli. I have made no secret about my connection to being Jewish and to watching the events unfold out there since October 7. One of the things that I've noticed, and this isn't just specifically related to that, but I've noticed it because of this, is this whole concept of shadow banning, right? So you have people who are legitimately trying to share good information who are getting banned or kind of shadow banned. In other words, their posts are going out there, but they're being dramatically squashed or turned down so that they're not reaching the right number of people.
65
00:43:32,584 --> 00:43:57,322
Daniel Nestle: So if their response requires disseminating good information to combat bad information, what do you do when sometimes the good information is being tamped down or downgraded by the algorithms? So there's a couple of elements here.
66
00:43:57,346 --> 00:43:57,530
Rafi Mendelsohn: Right.
67
00:43:57,570 --> 00:44:08,030
Daniel Nestle: Number one, what do you do about these attacks? And number two, what do you do when the good information is, in turn, being tamped down by the platforms themselves?
68
00:44:08,850 --> 00:44:50,532
Rafi Mendelsohn: Sure. Well, for communicators, social media is still just one channel, one opportunity to get our message out there, so much of the playbook will stay the same. I'm certainly not suggesting that the old communications playbooks that we had need to be rewritten and chopped up and burnt. That's definitely not the case. I suppose it's about understanding what we are seeing, and having that CT scan of exactly what the conversation is, and not just what the conversation is, but also who is involved in it. And I think so far, we're very focused on what's being said and not who's saying it, which is a key component.
69
00:44:50,556 --> 00:45:32,134
Rafi Mendelsohn: You might not even have to update the crisis communications handbook, but the principle, right at the beginning, is that you want to understand what's being said and who's saying it, and the nuance and the difference, and your approach is going to wildly change depending on what you are seeing out there. And so, on social, it's the ability to understand: okay, let's differentiate the fake accounts, let's see what the narratives are that they're putting out there. We do see some brands who, when they're experiencing this kind of negativity, take it upon themselves to respond to every single account, as a kind of communications and customer support: let's respond to everything. I'm really sorry you've had a bad experience, let's speak about it.
70
00:45:32,222 --> 00:46:15,388
Rafi Mendelsohn: And then sometimes we come along and say, well, actually, a huge chunk of all of those accounts that you're engaging with are fake. You're essentially speaking to robots, and so you shouldn't be doing that, because that's not going to have any impact at all. You do need to address them, but the way you do that is completely different. You're not reaching out to them; you want to get them taken down, you want to get them addressed. And even then, you might have tens of thousands of fake accounts, but maybe there are 200 fake accounts that are the kind of super-spreaders of the negativity and of the disinformation. And so, therefore, you want to address the quality of the negativity coming towards you rather than just the quantity.
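The "quality over quantity" triage Rafi describes, going after the top few hundred super-spreaders rather than all tens of thousands of fakes, is at heart a ranking by estimated reach. A minimal sketch; the fields and the posts-times-engagement reach proxy are illustrative assumptions, not Sybra's scoring:

```python
# Rank already-identified fake accounts by estimated reach and keep the
# top few to prioritize for takedown requests.

def super_spreaders(fake_accounts, top_n=200):
    """fake_accounts: list of dicts with 'handle', 'posts', 'avg_engagement'.
    Reach is proxied as posts * average engagement per post."""
    ranked = sorted(fake_accounts,
                    key=lambda a: a["posts"] * a["avg_engagement"],
                    reverse=True)
    return [a["handle"] for a in ranked[:top_n]]

accounts = [
    {"handle": "bot_a", "posts": 400, "avg_engagement": 50},    # reach 20000
    {"handle": "bot_b", "posts": 900, "avg_engagement": 2},     # reach 1800
    {"handle": "bot_c", "posts": 30,  "avg_engagement": 1200},  # reach 36000
]
print(super_spreaders(accounts, top_n=2))  # → ['bot_c', 'bot_a']
```

Note that raw post volume alone would rank bot_b first; weighting by engagement is what surfaces the accounts actually driving the narrative.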
71
00:46:15,564 --> 00:46:58,578
Rafi Mendelsohn: And so, again, that's a slight sharpening of the tactics that you are going to use in order to overcome the challenge that you're facing. And so, when going to the platforms, again, you're not saying they're being mean to us, or what they're saying is incorrect; you're saying it's a fake account, and by virtue of the fact that it's a fake account, it is breaching the platform's terms of service. And again, that might be the same approach you've taken before, but now you are able to put it into a different perspective, in a way that sharpens your tools to respond to such activities. On the opposite side, you also want to understand who is real and who's positive about you.
72
00:46:58,604 --> 00:47:37,906
Rafi Mendelsohn: Because maybe you see a conversation that's trending, or an issue that comes up, but maybe not everyone feels negatively about you. You want to identify who is real and who is being positive, so that you can amplify those voices, as well as just purely understanding what the lay of the land is. What does the CT scan say? So that you can then communicate that internally and respond appropriately, but also communicate it externally when journalists give you a call and say, hey, we're seeing all this negativity, and you can say, actually, of the five examples you've just sent me in an email, three are, in fact, fake. And that completely changes things. So those are just some of the tactics. But of course, it depends what's happening.
73
00:47:37,938 --> 00:48:22,374
Rafi Mendelsohn: It depends on the nature of it, which might not be so different. Maybe the company has done something, kind of an old classic: they've screwed up, or they've tried to cover things up, which is almost always worse, actually. But still, knowing what that conversation is, knowing what's happening, especially if there are people out there trying to manipulate the conversation, they have the motivation and the intent and the tools, and like you said, it doesn't cost much to conduct these kinds of things. So don't ignore them. I suppose the last thing I would say on that particular topic is, you shouldn't ignore them, but at the same time, with malicious actors, I think the analogy with cybersecurity really works.
74
00:48:22,502 --> 00:49:03,130
Rafi Mendelsohn: In cybersecurity, the way that people in that world think about it is, you don't necessarily have to have the most completely protected walls around your company; your walls just have to be higher than the next company's. So that if a hacker comes along and tries to target you as a company, and it's proving difficult to do so, they're just going to move on to the next company, because it's easier. And I think that's the same here for communications teams, and for companies and brands generally, if they have those systems in place, those alerting systems, the ability to differentiate and then approach it. We do see that these people are very opportunistic. They might be creating fake accounts to mimic your customer support.
75
00:49:03,250 --> 00:49:26,560
Rafi Mendelsohn: And if they're reported and taken down, we see those accounts being switched to other companies. They change the images, they change the content, in order to redirect themselves and target the next company along. So just being aware, being vigilant, and taking those initial steps, in that kind of cybersecurity psyche, might be enough to ward off that next attack.
76
00:49:27,020 --> 00:50:35,802
Daniel Nestle: It's interesting, because as you're talking about some of these elements of what these attacks look like, and the bots and the bot networks, I was trying to think about it from a cybersecurity standpoint: what could you do to defend? It's not like you could just install Norton or McAfee and, okay, there are no more bots, there's no more AI bothering you. It's really more like cancer, isn't it? Like you said, the CT scan: early detection is the key to survival, right? So obviously Sybra, your platform, does that. In the day-to-day activities of your typical comms and media team, or their social monitoring team, what are the trends? Well, I suppose you already said it: look for those upticks, for no reason, in negative activity. But are there any prophylactic measures?
77
00:50:35,906 --> 00:50:51,790
Daniel Nestle: Is there anything that people and brands can do at this point? In addition to investing in a listening service or platform, is there anything that your standard, run-of-the-mill person can do to protect themselves?
78
00:50:52,820 --> 00:51:34,536
Rafi Mendelsohn: Yes. I think that kind of monitoring is foundational, having the right kind of monitoring, however you do it, and also just that psyche, that mindset of, okay, we want to increase and improve our mentions and improve our sentiment, et cetera, but we also want to be on the lookout for the next thing. The best companies that are doing this are thinking, and kind of acting, like countries, not companies. And I know that sounds like an opportunity exclusive to big companies, right? I wish I was a big enough company to be able to start thinking of myself as a country. But actually, we've seen that amongst medium-sized companies as well: thinking like countries, not just like companies.
79
00:51:34,728 --> 00:52:10,546
Rafi Mendelsohn: And look, thinking of what that next exposure is, what the risk is, what the topic is that could impact you. So that's also kind of a mindset. But like any good crisis communications person will say, you don't have to respond to everything. So once you have that mindset, and have the tools, and have the ability to set up the alerting system, like you said, if there's a spike in the conversation, but also if there's a spike in fake accounts that are discussing you, you need to suddenly pay attention to that. But you don't always have to respond to everything. Responding to some things, monitoring other things, but also ignoring other things as well: that's incredibly important.
80
00:52:10,618 --> 00:52:23,590
Rafi Mendelsohn: If you see something that could be an issue, but actually it's not posing a threat, it's not having an impact on the wider conversation yet, then you don't need to do anything, and that can be empowering as well.
81
00:52:24,410 --> 00:52:46,720
Daniel Nestle: It also occurs to me that this could be a really good use case for defensive GenAI usage, for people who are in my role or in communications roles, where you use the GenAI tools available to you to game out these scenarios and prepare responsive statements, et cetera, like you normally would with any scenario planning.
82
00:52:48,660 --> 00:53:52,526
Daniel Nestle: I have to say that one of the, I guess, most eye-opening or exciting things about GenAI that I've seen is this ability to use it as a kind of wall off of which to bounce ideas, or, using the proper prompting and the right kind of base knowledge, to put your GenAI platform, I call it an intern, to the test as a scenario planner and as a co-creator of different kinds of mitigation tactics. It's a good use case there. Let me ask you, I mean, we're starting to get towards the end of our conversation, because there's only so far we can go, and there's still so much to unpack here. But you know, I don't think it's getting any better, right?
83
00:53:52,718 --> 00:54:52,228
Daniel Nestle: I certainly don't. And I think a lot of brands are feeling the same thing. And although maybe not with bot farms and cyber attacks and social media attacks in mind, they're just thinking, in general, we're not getting any benefit from putting ourselves out there around social issues, and around political issues for sure. And there's a vast pullback that's been happening. I mean, it's measured. I don't have the receipts in front of me right now, but we certainly have a lot of evidence that brands are not diving into the waters of social causes as they once were. Do you think that this is going to protect them at all from exposure to these malicious actors? Or what's the next tactic you think might happen, or that Sybra thinks is going to happen?
84
00:54:52,324 --> 00:55:34,782
Rafi Mendelsohn: I definitely agree that we've seen a pullback from brands taking specific positions. Their risk appetite, both from a marketing communications point of view and from a commercial point of view, the kind of deals they might be looking at, the kind of partnerships they might pursue, it's all through this lens of risk. But I think that has resulted in somewhat of a bunker mentality of, we're going to keep our heads down, we don't think it's going to impact us. And this is some of the feedback that we're hearing when we speak to marketing teams: we see some of the comments on our social media accounts, but generally speaking, we're not looking outside the walls of our own social media accounts.
85
00:55:34,806 --> 00:56:20,580
Rafi Mendelsohn: We're not looking at how people are engaging with our brand, or the wider topics around our brand. And it's kind of, let's keep our heads down and let's hope it will pass. Maybe it's because it's an election year, maybe it's because someone has an opinion, or we have exposure, we had some commercial dealings in Russia, or whatever it is. Unfortunately, if we're looking at this challenge purely through the lens of social issues, where you have people who are for certain topics and against certain topics, then I probably would concur. I probably would agree that you just need to keep your head down. And maybe we're going to see a different era where companies aren't people, and they behave the way that they used to, for good or bad.
86
00:56:21,040 --> 00:57:17,300
Rafi Mendelsohn: But actually, that's not really the motivation; that's not really what we're seeing. In the example that we discussed earlier, when we see brands being impacted, whether through the share price or because of something completely unrelated, that bunker mentality doesn't really work. The hiding and hoping and wishing it will pass, and hoping it doesn't affect us, isn't really going to cut the mustard. And so there is that need. And I think on a weekly basis we're uncovering examples of brands where we see malicious actors weaponizing social media, creating fake accounts to, in some way or another, manipulate the conversation and impact the brand. And unfortunately, it's not going away. Like you said, it is increasing: the tools, the GenAI tools in the hands of malicious actors, their ability to create very believable content, even just text.
87
00:57:17,800 --> 00:57:48,946
Rafi Mendelsohn: Even if English isn't your first language. Think of the kind of crypto and Bitcoin bots of the past, where it was fairly easy to tell, because there was something a little bit off with the grammar or the English. That doesn't really exist anymore, because at a very basic level, you can use GenAI tools. We haven't even gotten to deepfakes, but just at a very basic level, you can have bots even speak to each other and look more believable, look more authentic. It isn't really going to go away, and I think that's the kind of mentality that we're going to be challenged on.
88
00:57:49,018 --> 00:58:40,300
Rafi Mendelsohn: If we as a profession want to have that seat at the table, then I think there's a real opportunity to lead on this topic and say, okay, we understand the platforms. We understand, as much as one can, the algorithms, because this is what we're in on a daily basis. We understand how to craft messaging and communications and work with journalists. Okay, now we need to add into the equation how this is impacting general society, how it's impacting the world, no longer just around elections, and what we can do to mitigate it, and have something to say. And I think that kind of credibility, for us as an industry, as marketers, as communicators, we really benefit from, because we'll be able to lead, as opposed to being impacted once the doors of the bunker are blown off.
89
00:58:40,460 --> 00:59:23,910
Daniel Nestle: It's about understanding the landscape around us and being able to counsel and advise our clients and our executives and our employees in the right direction. Obviously, knowledge is power. If you're coming from a position of knowledge, you will do the right thing. Well, I don't know if you'll necessarily do the right thing, but you will definitely be positioned to add value to your organization. One thing that occurred to me as well is that you mentioned deepfakes. I know that we didn't get to it, and that's a whole different world there. And I think people are more aware of the fact that there are deepfakes out there.
90
00:59:23,990 --> 01:00:33,622
Daniel Nestle: And to me, it's not so much about the detection, although it should be; it's really all about a reaction that says, well, I don't know what I can trust at all anymore. So it's like a giant psyop that's happening, and actually is happening in certain sectors, in certain places, as we've seen. But as far as GenAI being this tool to create text: in the wrong hands, it also generates code, generates images. It generates everything that one would need to build a profile that would otherwise pass, at a cursory or even somewhat deeper inspection, as a real person. That's very troublesome to me. But, Rafi, we've talked a lot about these very concerning things. The last thing I'm going to ask you, which I generally ask most folks on the show, is, well, I usually ask what's keeping you up at night.
91
01:00:33,646 --> 01:00:59,180
Daniel Nestle: But I would imagine that your day job is fairly all about that. So what do you see coming down the pike? What do you think is the immediate or short-to-mid-term future for communicators, as far as what they're going to have to cope with in this environment and what they should be doing? Or if there's another trend that you're seeing that you want to highlight, let's be a little predictive here. What's on your mind?
92
01:00:59,880 --> 01:01:41,754
Rafi Mendelsohn: Sure. So I'm going to answer it from the perspective of where I come from as a communicator, right, from representing the industry. I think what isn't new is the conversation we've always been having about having that seat at the table, wanting that buy-in, and people seeing us as having the relevant opinion. And I think what is new is the questions that we're being asked, and the answers expected of us, with regards to new technology, with regards to new threats. You could be a midsize company, and people are asking you how Russia is going to affect us. We're no longer insulated from those things.
93
01:01:41,842 --> 01:02:19,398
Rafi Mendelsohn: And some of those might actually not be so new, but the expectation that we are going to be able to really guide and advise in such stormy weather, where it's so difficult, like we said at the beginning, to differentiate the good from the bad from the fake, I think is going to make it really challenging. And I think what's going to be interesting over the next six months is who is going to take that voice. I think over the last few months, there's been a certain amount of, you know, with GenAI, we haven't even, I've spoken purely about the negative aspect of it, but of course, a lot of it is positive, and we're all using it on a day-to-day basis.
94
01:02:19,494 --> 01:03:03,126
Rafi Mendelsohn: I think it's fantastic, what we're seeing here, and I'm encouraging my own children to be as proficient and as engaged in it as possible. It's a great opportunity. But also, in terms of the challenges coming down the line, I think we've seen that there have been lots of tools out there that have promised to be the answer, and they're not. And maybe we've passed that tipping point of, okay, it's not going to be the thing; people aren't going to lose their jobs in the millions, and whole industries aren't going to be burned down, generally speaking, my personal opinion. But at the same time, this technology is still here to stay. Okay, so therefore, as communicators, how do we weave that into the conversation about the strategic and tactical challenges that we have, and the questions that we're being asked?
95
01:03:03,198 --> 01:03:33,700
Rafi Mendelsohn: And so the next six to twelve months, I think, is going to be really interesting from our industry's perspective: who is going to be the ones with the best answers? Is it going to be the people in the cybersecurity world? Is it going to be technologists? Is it going to be tech companies, or is it going to be communications? And it might be a combination of all of those things. And we are trying to work our way through that. But I think we need to continue to up our game and work through it at a level beyond just "AI has helped me write a better press release."
96
01:03:34,080 --> 01:03:45,730
Rafi Mendelsohn: I think we need to look at it as a whole and really understand where the direction is going so that we can then do our jobs and we can then also guide the people around us and the organizations that we work in.
97
01:03:46,590 --> 01:04:36,972
Daniel Nestle: Brilliant. I mean, I agree 100% with everything that you're saying, and it feeds into something that I think about sometimes and have spoken about a little bit, which is that we just don't know what is going to happen. We don't know how far we can take GenAI. It is that black box, but it is this enabler where you can say to it, hey, wouldn't it be great if? And if you can say, hey, wouldn't it be great if I could mitigate cyber attacks, or wouldn't it be great if I could mitigate social media negativity, and then start to have a conversation with it, figure out where that's going to go, and see if it can help you come to solutions, there's a tremendous amount of value there.
98
01:04:37,116 --> 01:05:21,386
Daniel Nestle: I also think, about the skills and the kinds of things that people are going to need, you mentioned, is it going to come from technology? Is it going to come from communications? Is it going to come from cybersecurity people? I think there's just going to be a realignment of different roles and responsibilities as AI and other technologies take root and become more and more practiced. We're going to see shifts in the types of tasks that people are able to do, and in what a job is. A job is a culmination of a whole bunch of tasks. So as the root elements of what a job and a role are change, it's natural that those roles will also change. We just don't know how or what yet, but I think if we're on the lookout for it.
99
01:05:21,418 --> 01:06:08,644
Daniel Nestle: And we're watching it and ready to embrace it and move ahead with it and try to stay ahead of it, then we're in a good place. So, Rafi, I am again very pleased and grateful that you've joined me and our listeners. I mentioned it earlier, but Rafi is in Israel today and I am here in the US, so we're talking about a seven-hour time difference. It's the end of Rafi's day, and it's sort of the middle of the beginning of mine. So I really am grateful for you taking time from your post-work family time to come and join me and our listeners here today for this conversation. You know, if anybody wants to find Rafi, just look on LinkedIn for Rafi Mendelsohn. His name will be spelled properly in the episode title. Go to cyabra.com.
100
01:06:08,652 --> 01:06:25,510
Daniel Nestle: Again, that's, it will be spelled properly in the notes, but cyabra.com cyhabra.com, fascinating area of communications we should really all be on top of. Check out cyabra.com dot anyplace else, Rafi, that people can find you.
101
01:06:25,700 --> 01:07:04,312
Rafi Mendelsohn: I think those are the main places. I'm on X, although probably not posting and sharing content as much as I used to. I would just say those are the main channels. But please also get in touch. I think this is a challenge that we're all trying to grapple with and work through together. So definitely get in touch, and I don't mean in terms of business; I wear a marketing hat, so I have the luxury of that. But I genuinely mean in terms of, okay, let's see what we can uncover about the brand, but also, how can we come up with better tools and solutions to the challenges that we're facing? We might have a better CT scan, but how can we also be better medical practitioners as well? So do get in touch around that.
102
01:07:04,496 --> 01:07:16,180
Daniel Nestle: Thanks, Rafi. That's fantastic. And again, thank you for your time. Thanks for being on The Trending Communicator. And I will see you here again sometime, because I really need to talk more about deepfakes. So thank you very much for your time.
103
01:07:16,600 --> 01:07:17,900
Rafi Mendelsohn: Thank you for having me.
104
01:07:23,930 --> 01:07:48,220
Daniel Nestle: Thanks for taking the time to listen in on today's conversation. If you enjoyed it, please be sure to subscribe through the podcast player of your choice, share with your friends and colleagues, and leave me a review. Five stars would be preferred, but it's up to you. Do you have ideas for future guests, or do you want to be on the show? Let me know at daningcommunicator.com. Thanks again for listening to The Trending Communicator.