
ChatGPT & AI Risks – Intego Mac Podcast Episode 278

AI has reached an inflection point; we discuss ChatGPT and other AI tools and their potential security and privacy risks. We also have an ears-on report about the new HomePod; it sounds better than expected.

Transcript of Intego Mac Podcast episode 278

Voice Over 0:00
This is the Intego Mac Podcast (the voice of Mac security) for Thursday, February 9, 2023.

This week’s Intego Mac Podcast security headlines include an “ears on” report on the audio quality of Apple’s newly relaunched HomePod…A warning about using Google to find downloadable software: legitimate-looking search results could point you to malware…And the artificial intelligence revolution has reached the mainstream. Services like ChatGPT are used more and more, and tech companies like Google and Microsoft are announcing similar features for their search sites and software. What are the security and privacy considerations you should be aware of as a user of these AI advances? Now here are the hosts of the Intego Mac Podcast: veteran Mac journalist Kirk McElhearn, and Intego’s Chief Security Analyst, Josh Long.

Kirk McElhearn 0:56
Good morning, Josh. How are you today?

Josh Long 0:58
I’m doing well. How are you, Kirk?

Is Apple’s new HomePod better than the first HomePod?

Kirk McElhearn 1:00
I’m doing okay. We’ve got an interesting topic: today we’re going to talk about AI, and we’ve done a little experiment, which is actually quite enlightening. But first we want to talk about the new HomePod. We mentioned it a couple of weeks ago, when Apple announced the new HomePod, which was a bit surprising. But now we’ve had “ears on”–or at least I’ve had “ears on”–and we want to spend a couple of minutes talking about the differences between the original HomePod and the new HomePod. Now, I’ve been pretty bearish on the HomePod since the beginning. I thought it was overpriced and it didn’t sound good. But I’m ready to recommend that people buy the new HomePod. I think the audio is so much better than the first one. And you have options to reduce the bass, which Apple added about a year after the first one came out, so if it does sound too bassy, you can do that. I’ll link to my review on the Intego Mac Security blog, where I pointed out a number of things that just weren’t perfect compared to, you know, a good stereo. But the more I think about it, the more I appreciate that you can have a single little speaker that doesn’t take up a lot of space and doesn’t look ugly, the way speakers generally look ugly. And if you buy a stereo pair, which admittedly is $600, you can listen to spatial audio. The only other way you can get this Dolby Atmos spatial audio right now is with a much more expensive sound system.

Josh Long 2:17
You mentioned this a couple of weeks ago, and you wrote about it when Apple first announced it. You were kind of like, “Yeah, but you could buy a soundbar for, you know, a lot less than that.” If you’re talking about $600 to get two of these so you can get spatial audio…that’s a lot of money. And is it really worth it? Like, are these even good enough? So do you really think that there’s that big of a difference between the original HomePod and the new version of it?

Kirk McElhearn 2:44
Yeah, I do. Not only is the audio a lot clearer, but there’s the ability to have spatial audio. So far I’ve only listened to it on a single HomePod, and I’m tempted to buy a second one to make a stereo pair; you get that extra spaciousness. Now, I do have, in my TV room, a Sonos Arc with rear speakers and a subwoofer, so I’ve got a full surround sound system, and it’s obviously different. But it’s the compactness of the HomePod and what it can do with spatial audio that I think is a game changer. Now, you admitted before the show that you’ve never heard spatial audio, and you don’t know what it’s about. There are currently several ways you can try this without investing in a big audio system. You could use a new iMac or a new MacBook Pro or MacBook Air. These are small speakers, so it’s not really great, but it is spatial audio. You could use the AirPods Max or the AirPods Pro, and they offer spatial audio. So you can try it out if you have these devices. I think the difference–and can I plug one of my other podcasts?–we do a music podcast called The Next Track with Doug Adams, our producer here, and we’ve talked about spatial audio quite a lot, because this is the new thing in music. If you’ve never heard spatial audio, it’s like you’ve been listening to mono all your life and suddenly hear stereo. It’s that big a change, if it’s well produced, well mixed, and if you can hear it on the right type of equipment.

Josh Long 4:10
Okay, well, not everybody has $600 to spend on a couple of HomePods.

Kirk McElhearn 4:10
Granted, it’s not cheap, and you can’t do this with the HomePod mini. The HomePod mini doesn’t do spatial audio. Two HomePod minis at $200 is just two HomePod minis at $200.

Josh Long 4:27
It still is surprising to me, in spite of your recommendation and saying that this is better than the original HomePod. I’m still surprised that Apple relaunched this product, because I don’t see why you’d discontinue it for a while and then bring it back. I just don’t get what Apple was thinking there.

Kirk McElhearn 4:47
Well, a lot of this has to do with the Smart Home features, and we talked about this a couple of weeks ago: the fact that it’s got temperature and humidity sensors, the fact that you can give Siri commands and it can turn on your lights and lock your door and all that sort of thing. And I think Apple is going down two paths here. One is the path of bringing back, I want to say, high-end audio equipment, because they’re going to do more in that direction. But they’re also reinforcing the Smart Home, because they’re doing a lot more in that direction.

Josh Long 5:17
But the HomePod mini can do all those things, though, right?

Kirk McElhearn 5:19
The HomePod mini can do all the Siri and Smart Home stuff. It’s got temperature and humidity sensors that have been in the HomePod mini for more than a year, but they weren’t activated until recently. So it can do all that. This is for people who do want the better audio. And it does make me think that Apple is going to pay more attention to audio beyond just speakers. I have AirPods Max on right now that I use when we’re podcasting, and I use them to listen to music. Apple’s got good headphones that they market under their own brand, and they’ve got Beats headphones. I think we’re going to see more. I really want to see the HomeBar: a soundbar with the Apple TV and two, you know, stereo HomePods, Dolby Atmos surround sound, spatial audio. That’s really where they’ve got to go. Okay, getting back to Google news. Here’s a new Google warning. When we talked about this before we started recording, I said, “Didn’t we just talk about this a couple of weeks ago?” Until further notice, think twice before using Google to download software.

The top result in a Google search could point to malware

Josh Long 6:16
Yeah, I thought it was worth bringing this up, because Ars Technica had a big article this week about how, if you are doing Google searches, especially Google searches for software, there’s a pretty good chance you’ll come across malware. And we did actually mention this just recently on the podcast, that that is something that you can come across. It can either be malware, or it can be scam websites. And very frequently, the bad guys are buying ads that show up at the very top of the search results. And those aren’t always obvious as ads, especially to an untrained eye, right? If you’re not expecting the first result to be an ad, you may very well click on it. Even if you notice that it’s an ad, you may assume that a legitimate company is buying an ad so that it can get placement at the top of the search results. So they published this report and talked about all kinds of different malware and how, over the past several months, this has been a really big problem that Google just hasn’t been able to keep up with. As a matter of fact, you might remember that we wrote about this way back in September, about fake App Store pages being the new Flash Player alerts, right? You had sent me a screenshot: “Hey, look at this weird webpage that I came across when I was searching for some software.” And I said, “Oh, yeah, I’ve actually been seeing that too.” They’re making fake App Store pages that look like the page that you would expect to see on apple.com that would link to the App Store app, right? But instead of that, they’re offering a download, and, well, it’s actually malware. So it’s just worth bringing up, because this is something that other people are recognizing now, too. This is really becoming a big problem, and it’s been ongoing for many, many months at this point.

Kirk McElhearn 8:12
Okay, here’s a metaphor. I grew up in New York City, where winters can be very cold and we get a lot of snow. If someone’s walking down the sidewalk and slips because you haven’t cleared the snow in front of your house, you can be liable for their injury. Why isn’t Google liable for this if someone gets scammed?

Josh Long 8:29
Well, to some degree, I would say it would be very difficult for Google to know about and be able to prevent every single possible case of malicious search results getting ranked highly. I think when it comes to Google ads, that’s where there’s a lot more liability on Google’s part: they should have humans reviewing these ads, making sure that they’re not malicious. Now, to be fair, there are ways around review. For example, somebody could have the ad link to a site that, as of when Google reviews the ad, isn’t doing anything malicious, and then change the site afterwards. So that could happen. This is the kind of thing we were just talking about recently, where there are some potential ways that people could slip something like malware or scam apps into the App Store. Because even though there’s a human reviewer, if things change after the fact, then it is possible for something malicious to get into the App Store. And the same thing can happen with Google ads as well.

Kirk McElhearn 9:38
So maybe if this Google ad platform is unsafe, then they should stop using it until they can make it safe.

Josh Long 9:45
Well, that would be nice. But of course, Google makes the majority of its money from ad revenue, right? So it seems like this is something that Google really needs to figure out a good solution for, because this is really a problem. I mean, if you’re going to make the first result an ad, which makes sense for them to do, right, because people are paying for that first slot, then you’ve really got to have a better way to vet that content and ensure that, if it does change, you’re aware of that and maybe automatically block those results. You know, don’t show those ads if you detect that something significant has really changed on that page.
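
What Josh describes amounts to re-checking an ad’s landing page after it has been approved, and flagging the ad if the page has changed. Here is a minimal sketch of that idea in Python; it is purely illustrative (the URL and baseline hash are placeholders, and a real review pipeline would compare rendered content rather than a raw hash), not a description of how Google actually vets ads.

```python
import hashlib
import urllib.request

def page_fingerprint(url: str) -> str:
    """Fetch a page and return a hash of its raw contents."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return hashlib.sha256(response.read()).hexdigest()

# Hypothetical baseline snapshots recorded when each ad was approved.
approved_ads = {
    "https://example.com/download": "baseline-hash-recorded-at-review-time",
}

def recheck_ads(ads):
    """Return the landing pages whose content no longer matches the approved snapshot."""
    flagged = []
    for url, baseline in ads.items():
        try:
            if page_fingerprint(url) != baseline:
                # The page changed after review: hold the ad for a new review.
                flagged.append(url)
        except OSError:
            # An unreachable landing page is also worth flagging.
            flagged.append(url)
    return flagged

if __name__ == "__main__":
    print(recheck_ads(approved_ads))
```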

Hackers are using Microsoft OneNote to deliver phishing attacks

Kirk McElhearn 10:31
Okay, hackers are using a new trick to deliver their phishing attacks, and I didn’t know that this was possible. But apparently, if you use Microsoft OneNote, you can send a OneNote document to someone. The reason this works for phishing is that, since Microsoft now blocks macros in Word and Excel by default, you can’t send macro viruses. Or at least macro viruses aren’t as effective; you’ve really got to convince people to turn on macros. But you can send OneNote documents to people, which can contain embedded files and scripts, and these phishing emails can sort of get through all the protections.

Josh Long 11:06
I think a lot of people don’t even realize that they have OneNote installed. Because this is not something–especially on the Mac–like who uses OneNote, right? I mean, you’ve got Apple’s Notes app, which is for a lot of people, I think, much better than OneNote. So I don’t know why you would necessarily use OneNote, unless maybe you worked for a company, you know, where you had a lot of Windows users.

Kirk McElhearn 11:29
Because your business uses it. And you have to use OneNote, because you’re working with an Exchange server, and all your data has to go on the Exchange Server. So that makes a lot of sense.

Josh Long 11:37
Yeah. And that would be like the one scenario where it might make sense to actually use OneNote on a Mac. But you may not necessarily realize that a document is a OneNote document; I mean, you just might not be thinking about that aspect of it. But if somebody sends you a document and you’re able to open it, well, it could potentially do some malicious things. And a lot of security software is not necessarily looking at OneNote files. These use .one (dot O-N-E) as the file extension, which you’ll find at the end of these documents’ names if you have that functionality turned on in the Finder to see the three-letter extension at the end of a file name. So one of these files will open up in OneNote if you double-click it, and it could potentially contain some malicious content. It’s just something to be aware of.
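
If you want a quick way to spot the files Josh mentions, here is a minimal sketch in Python that scans a folder for documents with the .one extension. It is only an illustration, not a substitute for security software, and the Downloads folder path is just an example.

```python
from pathlib import Path

# OneNote documents use the .one extension; add other extensions you rarely expect to receive.
SUSPECT_EXTENSIONS = {".one"}

def find_suspect_files(folder: Path):
    """Yield files in the folder (and subfolders) whose extension is on the watch list."""
    for path in folder.rglob("*"):
        if path.is_file() and path.suffix.lower() in SUSPECT_EXTENSIONS:
            yield path

if __name__ == "__main__":
    downloads = Path.home() / "Downloads"  # example location; point it wherever you save attachments
    for suspect in find_suspect_files(downloads):
        print(f"Heads up: {suspect} is a OneNote file. Don't open it unless you were expecting it.")
```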

Kirk McElhearn 12:34
Don’t trust anything.

Josh Long 12:35
If anybody sends you any kind of file, you should be a little wary, you should be a little bit suspicious of it, right?

Kirk McElhearn 12:39
But what if it’s your boss, and someone’s forged their email address, and you think it’s coming from someone you know…. It’s a combination of social engineering to get people to open the file, and trickery in the file to get people to give away information.

Josh Long 12:53
Yeah, and social engineering is a big part of what phishing is, anyway. The whole idea is to trick that person, to social engineer them in some way or other into thinking that this is something legitimate. And then they fall for a scam, or they end up installing malware, or whatever it might be.

Kirk McElhearn 13:13
Okay, we’re going to take a break. When we come back, we’re going to talk about AI.

Voice Over 13:54
Protecting your online security and privacy has never been more important than it is today. Intego has been proudly protecting Mac users for over 25 years. And our latest Mac protection suite includes the tools you need to stay protected. Intego Mac Premium Bundle X9 includes VirusBarrier, the world’s best Mac anti-malware protection, NetBarrier, powerful inbound and outbound firewall security, Personal Backup, to keep your important files safe from ransomware, and much more to help protect, secure, and organize your Mac. Best of all, it’s compatible with macOS Ventura and the latest Apple silicon Macs. Download the free trial of Mac Premium Bundle X9 from intego.com today. When you’re ready to buy, Intego Mac Podcast listeners can get a special discount by using the link in this episode’s show notes at podcast.intego.com. That’s podcast.intego.com, and click on this episode to find the special discount link exclusively for Intego Mac Podcast listeners. Intego: world-class protection and utility software for Mac users, made by the Mac security experts.

What is Artificial Intelligence and what does it do?

Kirk McElhearn 14:35
Okay, we’re gonna talk about AI. Or should it be “an AI”? “An” before “I”? Artificial intelligence. There’s been a lot of AI news this week; in fact, over the past few weeks it’s just been piling on. ChatGPT is this AI interface, app, website, whatever: you can ask it a question and it can give you a 1,000-word answer, or a book report, or a term paper, or a dissertation…all sorts of things. Microsoft announced that they were investing a billion dollars into OpenAI, the company behind ChatGPT. Their eventual goal is to roll this into Microsoft Office (we’ll get there in a second), and they’ve just announced this week that they’re including it in their Bing search engine. It’s not available to the general public yet; you can sign up on the waiting list. What they’re going to do for Microsoft Office is really interesting. I don’t know the last time you had a real corporate job, Josh. I mean, you work for a company, and you have to do emails and reports and stuff like that. But there is so much content that people generate in businesses that takes so much time, and that has to be proofread and has to have good grammar. Imagine if you could just take the five data points that you need, give them to ChatGPT, and tell the software, “Give me a report about our sales for this quarter,” and it will make a fluid report. Think of all the time that would save for people in business.

Josh Long 15:59
There’s a lot of really good potential, I think, for simplifying workflows for people. In fact, one of the things that came out just within the past week or two: OpenAI, the company that makes ChatGPT, has come out with…what are they calling it? ChatGPT Plus. Basically, it’s for if you want prioritized access to ChatGPT, because the free users are getting stuck waiting in a queue now, because so many people are using this technology. So if you’re willing to pay a little bit, well, now you get quicker and hopefully immediate access to ChatGPT. And there are some other benefits, too, apparently, that you can get from paying for it. But if you work for a company, and you’re using this technology to, again, simplify your workflows…maybe you have some programming project. We’ve mentioned before that it can be used to create malware, right? It can obviously be used to create legitimate code for software projects as well. And so maybe it will save you a little bit of time to have ChatGPT write the outline, and then you fix it, right? You go back in and decide what needs to be changed to make it your own, to personalize it, and to make it perfect from your perspective. But at least it gives you that base. And you could potentially do the same thing with things that you’re writing as well. So I understand that there are a lot of really good use cases for it. But there are a lot of potential problems with doing this as well.
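
As an illustration of the workflow Kirk and Josh describe (handing a few data points to a model and asking for a draft report), here is a minimal sketch assuming the OpenAI Python client and a completion-style model; the model name, figures, and prompt are placeholders, and any draft it produces still needs a human to check the facts and wording.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of source code

# A handful of data points you might paste in from a spreadsheet.
sales_data = {
    "quarter": "Q1",
    "revenue": "$1.2M",
    "units_sold": 8400,
    "top_region": "EMEA",
    "change_vs_last_quarter": "+6%",
}

prompt = (
    "Write a short, fluent sales report for management using only these figures, "
    "and do not invent any numbers:\n"
    + "\n".join(f"- {key}: {value}" for key, value in sales_data.items())
)

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model name
    prompt=prompt,
    max_tokens=400,
    temperature=0.3,           # keep the wording conservative
)

print(response.choices[0].text.strip())
```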

Artificial Intelligence can be used to create fake speech

Kirk McElhearn 17:38
So AI is being increasingly used in a lot of areas. We just started putting transcripts of this podcast on the Intego website. We use Otter, which is well known for transcribing audio. This is a tool that I’ve been using in some of my work interviewing people, and with interviews it’s not entirely precise, because the interviewee’s audio is never very good. But when I drop a podcast into Otter, I want to say it’s 98% accurate, and you only have to do some mild editing. And there’s something else: a company called ElevenLabs recently released a product that can take a sample of someone’s voice and then read a text in that voice. Now, Josh, you weren’t around in 1861, were you, by any chance?

Josh Long 18:23
No.

Kirk McElhearn 18:24
I want you to listen to this:

“Josh Long” (Deepfake) 18:26
Four score and seven years ago, our fathers brought forth on this continent a new nation conceived in liberty and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation or any nation so conceived and so dedicated, can long endure.

Kirk McElhearn 18:44
That was the beginning of Abraham Lincoln’s Gettysburg Address. Now you will admit that it does sound like you, right?

Josh Long 18:50
A little bit, yeah. It’s not perfect, but it does sound a lot like me.

Artificial Intelligence can be used with social engineering in order to trick you

Kirk McElhearn 18:54
It’s got your voice timbre, it’s got the sound of your voice. Now, when you listen more carefully, you can tell that the pauses aren’t necessarily correct and the inflections aren’t ideal, but this is just a short segment. Now I must say, I came up with the idea to do this when we were prepping the podcast. It literally took me three minutes to generate this. I took a one-minute segment of your voice from the last podcast recording, I created the voice model on the ElevenLabs website, I pasted in those two paragraphs of text, and it was immediately available. Now, if you want to do this for longer text, it’s going to take a bit longer. But I was able to make a pretend Josh very easily. So here’s a scenario. It’s Sunday night, and you get a message on your phone from your boss saying, “Listen, I’m on my way to a meeting in Dubai. It’s really important. You’re going to get a call from Joe Jones, he’s going to help me out with this, and you’ve got to give him the password for this account so he can get this information from you for the meeting. I’ll call you back tomorrow and let you know if everything’s okay.” Click. The scammer has taken a sample of the boss’s voice, because he’s done interviews and he’s been on podcasts and he’s given speeches; they’ve come up with a text that sounds convincing; they’ve edited it with some airport terminal noises in the background, you know, “Flight 2-72, now boarding,” and you can get all those sound effects on the internet. And if I’m at home on a Sunday night, and I hear my boss telling me I’ve got to give the password to Joe Jones when he calls me, what am I going to do? I’m not going to say, “I don’t trust you.” You’re my boss; am I going to lose my job? You said you’re going to be on the plane, so I can’t contact you. What do I do?

Josh Long 20:35
Well, this is one of those things, again, social engineering, right? This is something that can be, I guess, facilitated by using AI technologies like this. You’ve always been able to pull off similar scams, right? You can trick somebody by sending an email that appears to be from the boss, and that might be just as effective. But if you’re actually hearing the boss’s voice, and it sounds convincing enough…if you get this as a voicemail suddenly on your phone, and you think, “Oh, maybe I just missed a call from my boss,” and you go to listen to it, and it actually does sound like your boss, that might be a bit more convincing than an email. People at this point hopefully are a little bit more leery about things that they get in an email. But if you’re hearing a voice, that’s something that could be more convincing to somebody, I think.

Kirk McElhearn 21:33
And if you think your job is on the line, you’re under pressure with something like this. And if the person gives you a valid reason why you can’t contact them to confirm this, I don’t know what I would do. This is one of those trolley problem things, right? Do you want the trolley to go straight and kill five people, or do you pull the switch and have it kill one person? We were talking about the trolley problem before the show; we won’t go into too much detail. But this is going to be difficult for people who are put into this situation. I mean, I just thought this up in five minutes before we started recording. Someone who spent a little bit more time could get a lot more sophisticated in the way the message is phrased and sent and all of that. So I think this is something we have to worry about. Going back to ChatGPT, it’s really, to quote an article, a “privacy disaster waiting to happen.” Because with ChatGPT, well, you can use it to create things on your own, but it has also scoured the web to get information. And if you ask it to get some information, to write a report about Josh Long, well, there’s more than one Josh Long out there, but there might be some personal information that you weren’t aware of, that was on your MySpace page 15 years ago, that could bubble up to the surface.

Josh Long 22:41
Yeah. And if you have a more unique name, like Kirk McElhearn? Well, if I ask it to tell me about Kirk McElhearn, it probably will be able to find information and extract it from the byline at the end of an article that you wrote, among other things, right? It can find information about you if your name is unique enough. In fact, if you prompted it enough, you could convince it to get information about the right Josh Long instead of some other Josh Long.

Kirk McElhearn 23:07
We’re not suggesting that anyone do this, by the way. So Microsoft is using OpenAI’s ChatGPT. Google has just announced a chatbot called Bard. And interestingly, it made a factual error in its first demo: it misidentified a star. And obviously, someone who is an astrophysicist or something said, “Well, no, this is not what it’s supposed to be.” And this is one of the problems with these things. I want to quote this; the person’s name is Grant Tremblay, and what he said I really want to put on a t-shirt. He says, “a major problem for AI chatbots, like ChatGPT and Bard, is their tendency to confidently state incorrect information as fact. The systems frequently ‘hallucinate’” in quotes, “that is, make up information, because they are essentially autocomplete systems.” And what he means by that is, when you start typing on your iPhone, it offers to autocomplete the words that you’re typing, and ChatGPT is basically going from word to word. You know, you can play that game where you just keep tapping the word that’s autocompleted, and you get these really weird things. ChatGPT does that, but makes it seem logical. Right?
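
To make the “essentially autocomplete systems” point concrete, here is a toy sketch in Python. It only learns which word most often follows which in a tiny bit of text, then strings words together greedily; the output can read fluently while meaning nothing, which is a vastly simplified version of the “confident but possibly wrong” behavior described above.

```python
from collections import Counter, defaultdict

# A tiny bit of "training" text; a real model learns from a huge slice of the internet.
training_text = (
    "the new homepod sounds better than the first homepod "
    "the first homepod was discontinued and the new homepod supports spatial audio"
)

# Count which word follows each word.
next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1

def autocomplete(start: str, length: int = 10) -> str:
    """Greedily append the most common next word, like repeatedly tapping the middle keyboard suggestion."""
    sentence = [start]
    for _ in range(length):
        candidates = next_words.get(sentence[-1])
        if not candidates:
            break
        sentence.append(candidates.most_common(1)[0][0])
    return " ".join(sentence)

if __name__ == "__main__":
    # The output reads fluently, but nothing checks whether it is true.
    print(autocomplete("the"))
```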

Josh Long 24:18
Right. Well, it’s using artificial intelligence, right? But emphasis on the artificial part.

Kirk McElhearn 24:25
Yes, and not so much intelligence. So we have a lot of risks around this. And what I’m surprised about is that it seems like, in the past couple of months, this has all of a sudden been set free, right? With DALL-E, the art thing that came out a few months ago, and ChatGPT, and these voice sample things now…. I did an article for another website I write for where I talked about AI and audiobooks, and I interviewed a well-known audiobook narrator named Simon Vance. We were talking about the existing AI narration feature that Apple offers to self-published authors who sell through the Apple Books Store. And when you listen to the samples, what Simon pointed out is that they lack humanity. And you can hear that in your Gettysburg Address: it doesn’t sound like you. You tend to talk with, you know, a great deal of change in frequency range and stuff like that, and these AIs don’t do that. They lack humanity; they don’t have those little touches that make things human, those little changes, the pauses, and everything. But what we did notice, when we were listening to your Gettysburg Address, is that there were sounds of breath intake at a couple of points. It’s adding that sort of ambient sound that you expect to hear from someone speaking.

Josh Long 25:43
Right. Yeah, there were pauses, there were breaths, but it was quite a bit too monotone, right? I think that’s one way to put it. Now, that’s not to say that there isn’t other technology that would do a much better job. There’s certainly a lot of technology out there. This is just one example of a particular AI tool that you can use in, like you said, three minutes, right? It doesn’t take a long time. You don’t have to train an algorithm and all that kind of stuff. You just give it a sample, and you give it something that you want it to say, and it spits out an output almost immediately. And consider that that’s kind of the basic thing that anybody can do for free, right?

Kirk McElhearn 26:21
That’s the version one.

Josh Long 26:29
Yeah, exactly. This is version 1.0, and it’s free; anybody can do it. So imagine what better technologies might be available to you if you’re willing to pay a little bit more, if you’re willing to train an algorithm a little bit better on how to sound more human-like…. And this is obviously very complicated stuff if you’re actually talking about training an AI, but some people are doing that, right? It’s happening right now. And everybody is rushing to be the next big thing in AI technology. ChatGPT, I think more than any of these others that have come before it, has really opened up people’s eyes. It’s getting headlines in the mainstream press practically every single day. It’s a big deal, people understand that it’s a big deal, and everyone’s rushing to implement it in their search engines and everything else now, so…

Kirk McElhearn 27:23
Now, after having listened to that last couple of minutes of what Josh said, go back and listen to the artificial Gettysburg Address, and compare how monotonous that voice is with Josh’s expressive voice. I can’t end this episode without bringing up “Mission: Impossible,” because Simon Pegg’s character is always putting on these masks that look realistic. And of course, it’s not really a mask; he puts on a mask, and then it’s the other actor who’s playing him. This is what the voice thing is doing: it’s a mask for the voice, and it’s gonna get better and better. And wasn’t there a simulated voice thing in one of the “Mission: Impossible” movies? I don’t remember. So we’re there. And I think it’s particularly worrying to consider all the risks that we face going forward. So we’re gonna pay attention to this as we go on, because one of the biggest issues here is going to be the security issue of people exploiting these things for social engineering and other attacks.

Josh Long 28:18
Right. Now, clearly, one of the big problems with this is that it could potentially be used against political candidates, right, to make it appear that they said something that they didn’t really say. Maybe you take an actual clip that hasn’t been published anywhere on the internet (somebody took a video on their phone at a rally and never published it anywhere), and now they edit it to make it seem like that person actually said something horrible, right? And this gets picked up on the news. You can see how this kind of thing could be a really big problem for things like politics. But what is not as obvious to a lot of people, or what people don’t like to think about, is that this could be used against you personally. Just like we talked about with these sorts of phishing scenarios, it could be somebody pretending to be your boss, or any number of other things that might be just convincing enough to trick you into giving away your money, or into doing things that could actually make you lose your job. There are very serious consequences to the easy availability of these kinds of technologies. And so it’s something that we really need to keep a close eye on, and just be aware that this stuff is out there.

Kirk McElhearn 29:32
Okay, until next week, Josh, stay secure.

Josh Long 29:34
All right, stay secure.

Voice Over 29:37
Thanks for listening to the Intego Mac Podcast, the voice of Mac security, with your hosts Kirk McElhearn and Josh Long. To get every weekly episode, be sure to follow us on Apple Podcasts, or subscribe in your favorite podcast app. And, if you can, leave a rating, a like, or a review. Links to topics and information mentioned in the podcast can be found in the show notes for the episode at podcast.intego.com. The Intego website is also where to find details on the full line of Intego security and utility software: intego.com.


If you like the Intego Mac Podcast, be sure to rate and review it on Apple Podcasts.

Have a question? Ask us! Contact Intego via email if you have any questions you want to hear discussed on the podcast, or to provide feedback and ideas for upcoming podcast episodes.
