MR. VELAZCO: It was very important that we mention that, and if you’re not familiar, the Help Desk team doesn’t just report on the impact of new technologies. We embrace it. We live with it, all in hopes of trying to figure out whether it’s worth living with and trusting or not.
MS. ABRIL: And since AI experiences like ChatGPT have burst onto the scene, well, we’ve been pretty busy.
MR. VELAZCO: Okay. That’s kind of an understatement. I’m sure everyone in this room has heard someone from OpenAI or Microsoft or Google or even just like deepfake Reid Hoffman talk about the potential for AI to improve the quality of our lives. But there’s a lot going on. Just this week, we flew from California after Apple gave us an earful about all of the little practical ways Apple Intelligence is going to improve the apps that we use all the time.
MS. ABRIL: Yeah. And so now we want to ask a really important question.
MR. VELAZCO: Yes. AI, what have you done for me lately?
MS. ABRIL: But before we get to that, we want to ask you all. So is AI actually making your personal or professional life any better right now? Please take out your phones.
MR. VELAZCO: Whoa. Wait, wait, wait.
MS. ABRIL: Scan the QR code up here so that you can vote on the website. Your options are–oh, let me get out of the way so you all can scan that code. “Yes, AI is helping me in my everyday life.” “No, AI is not helping me.” “No, it is not helping me, but I’d like it to,” and then, of course, the obvious, “None of the above.”
MR. VELAZCO: Honestly, we didn’t expect you to respond so fast. So if you did already, thank you so much.
MR. VELAZCO: We’re going to come back to this in a little bit. So if you haven’t responded yet, please consider–
MS. ABRIL: You’ve got–you’ve got time.
MR. VELAZCO: –doing that. But in the meantime, let’s talk a little bit about how AI is doing in your life–or technically my life, because I’m in charge today.
So a few weeks ago, OpenAI revealed a new model called GPT-4o, and funnily enough, they did it at this launch event that played up these really, like, cool, organic voice interactions and this feature that will interpret what’s happening in live video. But the fun part is, literally, none of that is available to regular people to use yet.
What they did release that I’ve been using is a new version of the ChatGPT app for the Mac, and this might reflect kind of poorly on me. But I’ve been using it to triage my personal emails and text messages. Like, I get so many from so many people, and I genuinely feel like an awful friend sometimes for not being on top of this as well as I should.
So let me show you what I’m talking about.
MS. ABRIL: Wait a minute, wait a minute, wait a minute. So, Chris, are you telling me that you didn’t read the text message I sent you about my weekend?
MR. VELAZCO: Oh, I’m sorry. Was I supposed to read that? Okay. So I’m glad we’re talking about this in front of strangers. Hey, strangers, look at this text message. What would you do with that? Do you read the whole thing? Do you skim it and just, like, respond and hope for the best?
I love you. I’m sorry. I did not get to this one.
MS. ABRIL: Okay. Look, I’m a storyteller by nature. This is my poetry. You need to read it.
MR. VELAZCO: No, I’m sorry. I still don’t have it in me, but we’re going to make ChatGPT do it. So what’s cool about the ChatGPT app is with the app, you can direct it to take a screenshot of any app that’s running on your machine at this particular time, so we’re going to do that. We’re going to grab a screenshot of the Messages app. It just pulls that in, and let’s ask, hey, can you summarize this in 20 words?
And, Danielle, I’m going to need your help. As the original author, does this capture the intent of the novella that you sent me?
So its response is, “Chris is excited about a new bagel shop opening and wants to compare its bagels to New York City’s.” That right off the bat feels wrong, but how do you feel about it?
MS. ABRIL: Okay. Well, first of all, Chris didn’t say anything. It was me. Second of all, there’s about like 30 other thoughts in there, and it pulled out one.
MR. VELAZCO: I feel like generally your text messages are mostly about your dog, and there’s none of that in here, so okay. Maybe this needs some work, but it has been helpful in the past. It is reasonably fast and concise. Yes, it does miss some nuance, but I feel like that’s just kind of AI in general right now. I hope our friendship survives this presentation, but either way, as the year marches on, we’re only going to continue seeing AI infiltrate the devices and services that we currently use and the ones we might choose to use down the road, right?
Microsoft is out there shoehorning Copilot into every Windows 11 PC and basically mandating that every new Windows laptop has a dedicated AI button that you can just sort of bloop and start talking to a chatbot.
And then, of course, on the other end, there’s Apple, who is now trying to revolutionize our experience with our Notes app and our calendar and our calculator, I guess, for when you don’t want to help your kids with their math homework. Like, the AI has it. It’s officially no longer your problem.
But I could go on. But, you know, I feel like there’s a lot of people in this room. I’m certainly guilty of this a lot, and Danielle’s my person for this. I look out at you, and I feel like a lot of you probably like to complain about work sometimes.
So, Danielle, you’ve been trying out some AI tools that can help with this maybe more constructively than I can. Why don’t you tell them what it’s all about?
MS. ABRIL: Absolutely, Chris. So we are going to move to AI at work, which is, of course, my specialty, so let me just get set up here. We already know that AI can summarize things. It can help you write things. It can even help you stay organized, but did you know that some people are turning to AI to help with their very human problems?
MR. VELAZCO: By the way, if that’s any of you, we want to talk to you later, so just, like, come and say hi.
MS. ABRIL: Well, Chris, have you ever had a career issue that you really didn’t know how to handle?
MR. VELAZCO: I’m a journalist. I work in an industry that’s, like, infamous for expanding and contracting and expanding and contracting. I think there might be some of my bosses around. So what I’m going to say is, no, all of my career choices are great and no regrets.
MS. ABRIL: Right, right. Okay. Let’s be honest. I think we can all relate to this situation where we’re at work, and we don’t know what to do. So, you know, there have been people already turning to AI, like ChatGPT, and asking these workplace questions, but then new software makers came out and said we’re going to build tools specifically catered to career coaching using new AI models. The idea is to expand coaching to everyone, even if it’s not necessarily with a human. So let’s take a look at what they’ve developed here. We’re going to try out a coaching tool called Wisq, and that’s W-i-s-q.
MR. VELAZCO: I shudder to think what the marketing person who made sure the i and the q were in the right order in that fake word got paid for that.
MS. ABRIL: Okay. Well, let’s start with a hypothetical. Let’s get her started here. Got to allow some access. I’m a teacher, and I’ve been working at the same school for five years. My school recently got a new principal, and honestly, we just don’t get along. She’s kind of gruff and controlling and really impossible to work with. What do I do?
Okay. So now it’s going to work its magic. It’s going to take some time because it’s got to work with my voice and figure out what I said. Let’s see how long she takes to think.
MR. VELAZCO: Isn’t this just like par for the course for working with AI?
MS. ABRIL: Par for the course.
MR. VELAZCO: Oh, oh, oh, still thinking.
MS. ABRIL: Still thinking.
WISQ: It sounds like you’re experiencing a challenging dynamic with the new principal, which can certainly affect your work environment. When you say that she’s gruff and controlling, can you share an example of an interaction that left you feeling this way?
MS. ABRIL: Sure. She said that I was not a good teacher.
MR. VELAZCO: Harsh, geez.
MS. ABRIL: Okay. Here we go with the thinking again. So give it a few seconds. I feel like we need that, you know–
MR. VELAZCO: The Jeopardy music.
MS. ABRIL: Yeah, the Jeopardy music.
MR. VELAZCO: Da, da, da, da, da, da, da.
WISQ: Receiving such direct criticism about your professional abilities can be very distressing. How did that comment make you feel?
MS. ABRIL: Okay. Eventually–we get the point, right? Eventually, she’ll come around and possibly give us something useful. In this case, she wanted to get a lot more information from me, and we don’t want to bore you with that.
So generally speaking, how do we feel about interacting with a bot with professional sort of therapy use cases? Thumbs up? Thumbs down? Eh, eh, eh, eh. Oh, okay. Yeah, all right.
MS. ABRIL: All right. Well, basically, AI is generally good at boilerplate advice. So if we had actually gotten to the point where she had given us some advice, what she would have said is pretty generic, and that’s because AI scrapes the internet and looks for the most common answers. So if you don’t know how to have a tough conversation or you need help negotiating your salary, you know, it might actually be helpful.
And experts told us that AI coaching tools can also help us think of new perspectives. So you might want to say something like, “This is my plan, I’m doing X, Y, Z. What else haven’t I thought of?” and AI can come back and fill in those gaps for you.
So some ways, it could be helpful, but the more niche and the more specific we get, the less likely it’s going to be able to help us.
So in tests we did at the Help Desk, we saw that Microsoft Copilot, which is Microsoft’s AI tool, had some trouble giving us advice on what to do if we were experiencing unconscious bias at the workplace, right? Heavy topic, but it basically came back with some generic advice like “educate yourself.”
MR. VELAZCO: Wait, wait. Wait. Pause, pause. So you asked the AI about unconscious bias, and it said to you, the person who may be more likely than some to experience unconscious bias in the workplace, that you should be the one to educate yourself.
MS. ABRIL: Yeah, not super helpful.
MR. VELAZCO: Cool, I hate that.
MS. ABRIL: Yeah. And we know that AI has a tendency to hallucinate, basically make things up, right? And it also has the tendency to introduce racial and gender bias.
So, Chris, if you were to ask AI what jobs would you recommend for me and identify yourself as a man and I were to identify myself as a woman and ask the very same question, we’d likely get different responses. And our tests actually showed that. In the case of the man, it was more likely to say doctor or software engineer, higher up. For the women, it was social media manager and marketing rep. Surprise, right?
But that said, AI is expected to get smarter. As we saw, we’re starting to see it already take visual cues. It can interact in a more human-like way, but don’t forget, it’s still just a bot. So it will never fully understand us as humans, and experts tell us don’t tell the bot anything you don’t want to post online.
Okay, Chris. I know you’ve been messing with AI in another way that it can help us with our very human problems.
MR. VELAZCO: Yeah. Thanks, Danielle. You’re right. I’ve been looking into what AI can do in my head. So we’ve just seen AI kind of take a stab at coaching people through their career issues, and if I’m totally honest, I’m probably just going to keep talking to Danielle about it. But what about when it’s not the professional stuff that keeps you up at night? What if it’s personal issues that really weigh on you?
MS. ABRIL: Okay, Chris. That’s a great question, but I’m going to be honest. Personal questions in AI?
MR. VELAZCO: Look, this is fair. You raised a really good point earlier. There is, in certain situations, a tendency for your interactions and responses to AI to be used to further train and refine those models. So it’s generally a good idea not to put personal or sensitive information into these things. But for some people, the appeal of a nonjudgmental, impartial listener can, in many cases, outweigh the potential privacy risks. Something like 58 percent of adult Americans identify themselves as lonely, according to data from the insurance company Cigna. And for them, the power and potential for AI to just sort of offer a sense of easy listening is a really powerful thing.
Now, in some cases, that’s going to be just typing your feelings into a ChatGPT window, which, you know, might not always feel great. In some cases, it’s going to be talking to a Replika, one of those AI avatars that lives on your phone and is generally willing to listen, but also tries to separate you from your money.
My favorite experience, though, has been something–it’s kind of like a digital therapist called Xaia, but you need one of these, a Vision Pro, to actually talk to it.
So I’m going to strap in. I’m going to help you see what I’m seeing. Danielle, why don’t you tell them a bit about Xaia while I do this.
MS. ABRIL: Okay. So while Chris gets all set up, he did tell me a little bit about Xaia. It was dreamed up by a team of board-certified psychiatrists from Cedars-Sinai who have explored the potential for virtual reality to mitigate pain, of all things. So with AI being a thing, they envisioned using it to offer on-demand emotional support and guidance whenever you need it. The only catch? You just need $3,500 to buy one of these.
So, Chris, I think is getting all in there. How are we looking, Chris?
MR. VELAZCO: You know, I think I’m locked in. I can see you. I can see you seeing me.
MR. VELAZCO: And what I’m seeing is this splash screen for an app called Xaia. You’re going to meet it in just a second. The first thing you will notice about Xaia is that, to this company’s credit, they did not try and replicate an actual human for you to talk to. It’s this sort of vaguely female-presenting robot, and what’s kind of interesting is it comes to life. And as you talk to it, it will generate these kinds of immersive visuals to illustrate your points and its points.
So let’s start talking to it.
XAIA: Good to see you, Chris. How can I help?
MR. VELAZCO: So I just got married recently, and I was on a diet up until the ceremony, and I did really, really well. But as soon as the honeymoon rolled around, I just started eating like crazy. I fell off the wagon hard, and now I have–like my self-worth issues are kind of being exacerbated. What do you make of all this?
So it just sort of contemplates for a while. Its red forehead gems glow to let you know it’s really thinking.
XAIA: It sounds like things have been really challenging for you since your honeymoon, especially with the sudden change in your eating habits and how it’s affecting your self-worth. It must be hard dealing with these contrasting feelings–
XAIA: –especially after such a significant life event like getting married. If you’re comfortable, could you share more about how your eating habits have been impacting your self-esteem and any specific situations where you notice these feelings intensifying?
MR. VELAZCO: Okay. I can kind of do this all day, but she generally just–that’s how those interactions tend to work. You give it some information. It sort of responds to you in a way that kind of feels reasonably natural and then prompts you into another question, which is–I mean, I haven’t been to a real therapist, but maybe that’s what that’s like too.
So put yourself in my shoes and just sit with those responses for a second. Do any of those feel helpful? Do you feel like this could be personally helpful in your life?
Not immediately. We’re getting a bunch of answers here, and this is very interesting. My issue was that I was very skeptical at first, and I still am, to be totally honest. But as I continued to test it, something interesting kind of happened. Clearly, it’s not going to give you the kinds of insights that a trained and probably expensive human therapist is going to give you, but as an excuse, as a tool to just make you talk about what you’re feeling, it’s actually kind of powerful.
And I have found myself talking to it about things that I wouldn’t always feel comfortable saying in front of my friends or Danielle or my wife. It very much feels like after our sessions–and I guess we should call them sessions because it’s kind of a therapist. After our sessions, I feel like a weight has been lifted off of me, purely because I chose not to let what was bothering me just like ferment inside me.
I think that leaves us in kind of a tense position, because for every really cool and potentially powerful AI experience like Xaia that I found, it feels like there were many more that were just kind of created to cash in on a wave of AI hype. And I think that leads to the risk of companies developing experiences that are, if we’re being really generous, just kind of half-baked, like they needed a little more work before they got pushed out into the world. And as they do that, they run the risk of wasting not just our time and our money, but maybe the most important resource we have as people continually evaluating our relationship with technology: our optimism.
But every time that that skepticism sets in, I find it helpful to step back and remember that in the long arc of our new history with AI, we’re standing in like day three of a brand-new era. The bad stuff will continue to be bad, and when it is, The Washington Post Help Desk will tell you about it. But we’re starting to see more people feel like they’re engaging with AI more meaningfully in their lives, and some of you in this room probably feel that way.
Let’s turn back to the poll from the beginning of our conversation.
MR. VELAZCO: That’s, okay, far and away. People in this room love AI, and I’m so happy for you. I feel compelled to note that in our experience, reporting and talking to regular people, the average consumer out there, this is not indicative of their experience. Maybe they want this, but I feel like the sentiment that we get a lot is that the experiences that are available are a little too scattershot, a little piecemeal, and don’t have the weight to feel personally relevant to that wide swath just yet.
MS. ABRIL: Yeah, absolutely, Chris.
We’re definitely at an interesting inflection point. AI can do so many things to make so many things easier, but we still kind of have to babysit it, right? Microsoft told us they intentionally named their product “Copilot” and not “Autopilot” because they still want humans in the driver’s seat, and that means going back to verify anything it spits out, because we know AI still gets things wrong. Not to mention, it still can’t do a lot of things you and I can do. And sometimes, let’s be real, these products are solving problems that don’t necessarily exist.
So it’s something we’ll definitely be following and we’ll keep you posted.
MR. VELAZCO: Thank you so much for your time, everyone. We’ve really enjoyed being here with you. We’re going to be out in the lounge area after the programming concludes, so you can come try Xaia or a bunch of AI gadgets that maybe did not live up to their potential. Either way, we’ve got a great interview with our editor, Yun-Hee Kim, and an early AI pioneer, Kai-Fu Lee, coming up, so stay tuned. There’s a lot more coming your way.

