Blog

5 Pieces of Artificial Intelligence News - June 9th 2023


Product
September 16, 2023

00:00
Speaker 1
Next. Let's get into it. Yeah. First topic: Mark Zuckerberg talking about how they're going to be open sourcing more and more stuff, increasing the investment in AI after the investment in AR got basically vetoed by all the shareholders. Like everyone else, they're pivoting from what they.


00:17

Speaker 2
Were doing earlier to doing AI.


00:20

Speaker 1
Let's dive in. Yeah.


00:22

Speaker 3
What's everybody's thoughts on it?


00:24

Speaker 1
The thing I took away from it is just a little bit of history of, like, how good Facebook is at this. They're an absolutely top-tier organization in AI. A little bit of history: there are the packages that get created for actually building these models. Originally you had TensorFlow, which is Google's product. It was bulky, it got the job done, and it was the best thing out there. When Ryan and I were in grad school in 2017, PyTorch, which was Facebook's answer to TensorFlow, was in its infancy. PyTorch was dramatically better and is now the absolute industry standard, like, what you should be using for all the underlying code you're writing for these large language models. So not only did they create the framework that is most popular, they've been releasing cutting-edge open source models for a really long time. And part of the benefit they get from their commitment to open source is that they're able to attract fantastically talented people who want to work on AI stuff but want to see it open sourced.
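To give a sense of why PyTorch won developers over, here is a minimal sketch of a single training step in PyTorch. The model and data are made up purely for illustration; nothing here comes from any real project discussed in the episode:

```python
import torch
import torch.nn as nn

# A tiny made-up model: 4 input features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 4)            # a random batch of 8 examples
y = torch.randint(0, 2, (8,))    # random integer class labels

optimizer.zero_grad()            # clear any old gradients
loss = loss_fn(model(x), y)      # forward pass
loss.backward()                  # autograd computes gradients
optimizer.step()                 # update the weights
print(f"loss: {loss.item():.3f}")
```

The eager, define-by-run style shown here, where you just write Python and autograd follows along, is a big part of why PyTorch felt so much nicer to work in than early TensorFlow's static graphs.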


01:22

Speaker 1
So Yann LeCun works for Meta, and he's one of the top-tier people in the field. He's publicly said that one of the reasons he works for Facebook and wouldn't work for other companies is that a lot of the work gets to be open source. So they have a big talent advantage as a result of their position on open source.


01:41

Speaker 3
Yeah, totally. The neat thing is this is a big announcement, and they'll use it for PR purposes, but Facebook and then Meta have actually been quietly leading AI research for a long time. They had FAIR, the Facebook AI Research lab; that was the original name back when Paul and I were in grad school. A lot of people were applying to work there because they were doing really cutting-edge AI research. I think it's now called Meta AI. They've been a leader in this research for a long time.


02:07

Speaker 1
Yeah, it's funny how they announced it too. I don't know if you guys saw it at the Meta summit, but they basically said, oh, AI? You mean what Facebook has always done from the beginning? So they're trying to spin this narrative of, oh, we've been working on AI from 2006 on, we were always an AI company, obviously. Which I think is kind of true.


02:32

Speaker 3
I think everyone in the world is doing that now. And Facebook is actually one of the companies that can credibly say that.


02:37

Speaker 1
Yeah, sweet.


02:39

Speaker 3
The other thing I was going to say. Sorry, Kendrick, go ahead.


02:42

Speaker 2
How significantly have they invested their efforts.


02:46

Speaker 1
In AR? Very significantly. Zuckerberg basically hopped on the call and was like, listen, we hear you, shareholders. We're just going to chill on this. That's funky.


03:00

Speaker 2
I thought one of the reasons why.


03:01

Speaker 1
How the company worked.


03:04

Speaker 2
Yeah, I thought one of the reasons why Apple positioned in the spatial computing space rather than the AR space is because they were worried about how that positioning would hurt them, since Meta already owned, like, the AR space and had that head start, and they didn't want to say, oh, we're also now jumping into the same space. I read something that April Dunford posted about how, yeah, they leaned out of that because they didn't want to compete with Meta.


03:36

Speaker 1
I don't know. That does not seem like an Apple move to me, because they just royally screwed Meta, like, in the worst possible way, when they were like, yeah, we're not going to let you use cookies for any tracking. They basically said, your whole business model? Screw that. They do not care about p****** off Facebook. They don't care at all.


03:58

Speaker 3
So I would be very surprised if.


04:00

Speaker 1
There were any considerations about not p****** off Facebook with their products.


04:04

Speaker 2
I don't know if it was for not p****** off Facebook as much as it was, hey, Facebook already has a really significant head start in this space. If we try to then come in and be another AR product, we might lose some of that market share or not be able to make it up as quickly as we would like to.


04:24

Speaker 1
I think that would probably be true if it was a different company, but Apple just comes in and eats whole industries. AirPods alone would be, like, one of the more valuable companies in the world, just as a standalone business. It's just unbelievable. They just come in and eat stuff. So I don't think they care. They're just like, we're going to take the entire market of this thing, whether that's headphones, the thing we all have in front of us, or whatever it is. That's a good point, though: Zuckerberg made a big statement about the Apple Vision product, right? He had a big internal email to employees that leaked, saying, yeah, theirs is so much more expensive than ours, and ours is so much more accessible. It was a little bit of spin, but you could tell that the competition lines.


05:14

Speaker 3
Are being drawn in that market as well. That's probably just going to increase the.


05:17

Speaker 1
Animosity between Apple and Meta. Yeah, for sure. Interestingly, Palmer Luckey also said that he loves the Apple device, but on the sort of, like, pass-through transparency thing, his thought is generally that it's not going to be helpful, that it's just going to make the whole device more expensive, and they're going to have to scale it back if they want to get down to a price point that can be competitive. Sweet. All right, guys, number two. Following discussions with US leaders, the UK, home to Europe's largest AI industry, plans to host a global summit this autumn to tackle AI risks, given predictions of AI displacing 400 million workers by 2030. What are your thoughts? Ryan, you go first. I'm going to come out swinging. Okay.


06:10

Speaker 3
400 million workers by 2030 is absolutely dumb. That's in six and a half years. 400 million is, like, the entire population of Europe. It's as if every man, woman, and child in Europe had a job and lost it within the next six and a half years. That's crazy. But the good thing is they're having a summit, which is going to save those 400 million jobs. A summit. Never mind. Go ahead, Paul.


06:36

Speaker 1
No, I was going to say this is just absolutely dumb. This is some academic, like, pontificating on stuff he has no idea about, about how it works in real life. You try to take this top-down view of, oh, all these things are going to get fully automated and these jobs are going to go away. But economies and jobs are evolving systems. They always evolve. Parts of them die, usually slowly, even when it feels like it's going fast; it's actually slow if you look at history. These are just evolving systems, and they're going to evolve. It's not going to dramatically change everything overnight. Since the 50s, people have been saying AI is going to come in and take all these, like, hundreds of millions of jobs, and no one's going to do anything, and everyone's going to work 5 hours a week.


07:14

Speaker 1
People have been saying that since the 50s, with huge amounts of academic studies to back them up all the time. This is just another instance.


07:23

Speaker 3
If these people had been around in, whatever, the early 1900s, they would have tried to legislate against the tractor. They would have said, oh, there are so many people toiling in the fields right now, and we want those people to keep toiling instead. And this technology improves lives; technology consistently improves lives. AI is going to be the same. It's going to be a valuable tool, and it's going to lift up everyone that uses it as well. So I don't know why they're trying so hard to slow down the momentum of what's going to be a very valuable thing for society.


07:53

Speaker 1
Yeah. And the other obvious one: everyone throws around "Luddite," but the Luddites were actual people who went and burned looms, because the basic thesis was the same as this one, which is, what are humans going to do if we're not making our own shirts with our bare hands? There's nothing left for us to do. That's it, we solved everything, we solved life. Meanwhile, they still crapped in holes and couldn't even visualize what was possible. So it's just another thing where, yeah, it's going to change some stuff, just like the loom changed some stuff, but things get better for you.


08:27

Speaker 3
Trying to slow that down or stop it is just the wrong approach. One of the few certainties we have in this world is monotonic technological progress. You can't put that genie back in the bottle, right? It's not going to go away. I guess we can't say technology has been universally good, but it has done far more good than bad. And our job is not to try and stop it, because that's impossible, but to invent structures and systems that make sure we can harness it for the good of everybody. I think that's the right approach, not just sounding the alarm over AI. This reactionary, knee-jerk thing I think is the absolute wrong approach.


09:01

Speaker 1
Yeah, but you've also got to think meta about what you're actually saving. Are you saving a lot of people toiling away putting some number from a PDF document into an Excel document, to then send that Excel document to another human, who takes that number from one Excel document and puts it into a different software system that then generates a PDF that goes back to the first guy? Come on, this is not what.


09:24

Speaker 3
Humans are supposed to be doing.


09:26

Speaker 1
Wow, Paul is spicy today.


09:30

Speaker 2
He drank his coffee this morning.


09:32

Speaker 1
He's got some fire to him today.


09:34

Speaker 3
Paul chose violence. Chose violence today.


09:39

Speaker 1
This next one is going to make you even more fired up. So Matt Clifford, a UK AI task force advisor, warns that humans have about two years to control and regulate AI before it becomes overwhelmingly powerful and potentially threatens humanity, emphasizing the need for a regulatory framework.


10:02

Speaker 2
Do we have two years?


10:03

Speaker 1
Yeah, I can go first. He's right and wrong. In the sense that you've only got two years, I think he's a little bit wrong, in the sense that it's already too late. The cat is already out of the bag. There are already open source models that maybe don't perform at the same level as the closed source ones, but they're pretty good, and the cat is already out of the bag in terms of dissemination. Once you have open source stuff moving around the Internet, you cannot regulate it. People will just move to a different country or do stuff covertly, and you just can't regulate it. Conversely, I don't think it's a threat to humanity. Again, people have been saying this forever, but, like, this time it's different. This time it's a threat to humanity. This time it's going to destroy everything.


10:50

Speaker 1
Granted, maybe at some point they're right and it does threaten humanity, but I'm a little bit too empirical for that: you've been sounding this alarm for decades, and humanity has yet to be threatened. In fact, it's basically just been good for humanity.


11:07

Speaker 3
No. Matt is a guy whose opinions I really respect. He's a very smart guy, and he's actually a very level-headed guy as well. So on the spectrum of people whose alarms I would take seriously, Matt would be high up on the "I would take that seriously" scale. I still don't agree with him in this case. I think that approach misses some of the technological limitations of these models, for starters. What he's describing there is this kind of superintelligence, runaway rogue AI scenario, where it manages to make itself smarter and smarter at a faster pace than we can comprehend. All of these rogue AI things depend on that, right? There's always that self-reinforcing loop. And empirically, we have not seen that thing getting so fast that we can't keep up with it.


11:52

Speaker 3
Even now, with some of the most advanced models, like GPT-4: if we wanted to improve on it ourselves, or if it wanted to improve itself or anything, that model took years to train. It's not like it's suddenly going to become this thing we can't keep a rein on so quickly that we wouldn't see it coming. I don't see that changing in the next two years. Never say never; we don't know what the future is going to bring in the long run. But I think the risk of that is low in general, for a bunch of other reasons. And if it happens, it's definitely more.


12:20

Speaker 1
Than two years away.


12:22

Speaker 2
Nice.


12:23

Speaker 1
True.


12:24

Speaker 2
What do you think the fear is? And I don't know if I have as concrete thoughts on this topic, but is the fear that we enter the I, Robot, Will Smith era, where these AIs and these robots are actually looking to dominate the social hierarchy and put humans beneath them? Is that the actual fear? Or is it much more nuanced, where people just lose out on jobs? I don't know. How does that fear actually manifest itself? Because to me, it seems so cinematic that it doesn't seem real. It definitely doesn't seem two years away. I don't know.


13:05

Speaker 1
I would say, or, go ahead. I was going to say the doomer camps normally fall into two buckets. One is that AI ascends to basically the top of the food chain, and it's like, yeah, these humans may or may not be useful.


13:22

Speaker 3
They're cute as pets.


13:23

Speaker 1
It's now the top of the food chain, and we are not the top of the food chain anymore. That's the AI hellscape scenario. The other scenario is the theoretical, like, paperclip problem, where you tell some super powerful, super intelligent AI to produce paperclips, and it razes the earth and destroys all humans in the process of trying to produce the most paperclips. It's ultra powerful, but has no sense of what it's destroying; a runaway agent, basically.


13:56

Speaker 3
Just satisfying its objective, and as a byproduct of that, we get wiped out.


14:02

Speaker 1
So those are the two most common kind of, like, doomer scenarios, I think.
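The paperclip scenario is essentially an objective-misspecification problem, and it can be sketched in a few lines of toy Python. Everything here is invented for illustration: `plan` and the resource names are not any real system, just the thought experiment in code:

```python
def plan(resources: dict) -> dict:
    """Greedy planner scored ONLY on paperclip count: it happily
    converts every resource, valued by humans or not, into clips."""
    return {"paperclips": sum(resources.values())}

# The "world" includes things we care about, but the objective doesn't.
world = {"iron_ore": 10, "farmland": 5, "hospitals": 2}
print(plan(world))  # {'paperclips': 17}
```

The point of the toy: the objective function omits everything else we value, so maximizing it destroys those things as a pure side effect, with no malice required.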


14:06

Speaker 3
Though it's funny: even with RLHF and some of the stuff that's happening with the current generation of models, we've seen it does an excellent job of ensuring model safety.


14:15

Speaker 1
Right.


14:15

Speaker 3
And we've seen huge steps. I think people have probably forgotten when Microsoft released its chatbot Tay on the Internet back in 2016.


14:23

Speaker 1
Yes.


14:23

Speaker 3
And Tay did not have the sort of safety protocols that are built in technologically now. Within 24 hours, it was already spewing hate speech, basically, after the Internet taught it to. I don't know what they thought was going to happen, but they opened this thing up to the Internet. Nowadays that doesn't happen with stuff like GPT, and we see it getting increasingly safe, in fact, as we get better and better at tuning these things. A lot of the intermediate steps when you're building the most sophisticated models are explicitly for safety, and they seem to be doing their jobs very effectively. I will say one other interesting thing, though. I think the other interesting scenario that is going to come up, probably within two years, and it's not some crazy doomer thing, is that we're going to see these models become good enough.


15:10

Speaker 3
At emulating sentience that there's going to be a lot of smaller groups fighting for AI rights, worried about rights abuses against AIs and things like that. And there's going to be a big debate over what you can and can't do to an AI once they start getting better at emulating people. We've already seen that happen a couple of times.


15:29

Speaker 1
Right.


15:29

Speaker 3
There was that guy that quit Google last year because he felt that the model had become sentient and it was being mistreated and it really wanted to escape and live and it actually couldn't. So I think that's going to keep on making friction with certain groups of people as well.


15:42

Speaker 1
Yeah, unfortunately, I think Ryan's right. All right, so jumping to Microsoft. They're in a new lane now. Microsoft has unveiled three commitments to ensure the trustworthiness of its AI solutions, including the Azure OpenAI Service for government, which provides secure access to AI tools to handle large data quantities and ensures data won't be used for model training. What are your thoughts on Microsoft and this piece of gossip? I think it's exactly what you'd expect. It's just classic Microsoft coming in and increasing that bundle size that they sell to giant enterprises, one of which is the government. So, classic Microsoft news: that bundle will just keep going up 5% as they tack on other things, and they'll just keep eating more and more of the software and infrastructure world. They're a juggernaut.


16:40

Speaker 3
I think it's a real value prop, though. I think what Microsoft is doing is actually pretty smart, because there's genuine demand for this. A lot of people who are consumers of AI now are concerned: how are these queries being used, how is my data being used? It's the ultimate privacy question.


16:55

Speaker 1
Right?


16:55

Speaker 3
And it's: are you going to use my data to train your future model? I think a lot of people are going to want a service like this, rightfully so. I think what they're offering is very much in market.


17:05

Speaker 1
So are you saying it's time to buy all the Microsoft stock we can? Is that the move right now? This is not financial advice. Not financial advice, yes, disclosed. Not financial advice; Paul is just long Microsoft. Drew, what are you? I am probably.


17:28

Speaker 2
Pro Microsoft as well.


17:30

Speaker 3
I love that.


17:31

Speaker 2
I think, yeah, there are a few other companies that I'm more pro on. I'm actually pro Tesla. I would definitely buy and hold Tesla for a very long time.


17:39

Speaker 1
Yeah.


17:40

Speaker 2
That's my.


17:43

Speaker 3
Microsoft has made so many smart acquisitions over the last six or seven years. I think people forget how on point some of those acquisitions were. Even now, you use Microsoft's VS Code to write your new AI algo while using GitHub Copilot and GitHub itself, which are also owned by Microsoft. At the same time, you're ignoring requests from recruiters on Microsoft's LinkedIn. There are all these different services which are quietly Microsoft-owned, and you can start to see where the streams converge and how they all fit together as pieces of a broader strategy.


18:16

Speaker 1
Yeah. And I think that's also a testament to the CEO they put in place after Bill Gates. I forgot his name.


18:23

Speaker 3
Satya Nadella. Not Ballmer, in my opinion.


18:28

Speaker 1
Yeah, he's done a wonderful job. And Ryan, as you said, he's quietly put together basically an empire that.


18:35

Speaker 3
People are unaware of.


18:37

Speaker 1
But yeah, pretty insane. He's retained the best part about Microsoft, which is their ruthlessness, just absolutely cold-hearted ruthlessness, while making them actually friendly to the technical people who want to work with them. Acquiring GitHub was a brilliant move. Doing VS Code just absolutely turned around Microsoft's image. Got it.


18:59

Speaker 3
I would actually say a testament to how quietly they've done this is that there's no M in FAANG, which is amazing. Good point. Very good point.


19:07

Speaker 1
Sweet. We're onto the last topic for this week. WordPress introduced an AI-powered writing assistant, Jetpack AI Assistant, capable of creating and editing blog posts, correcting language errors, translating among twelve languages, and adjusting text tone and style, although human creativity remains critical in content creation. What are your thoughts on WordPress here?


19:34

Speaker 2
How are they maintaining the necessity for human creativity? Because that's one thing: I am so not impressed with any of the AI content writing solutions I've seen. I think it's so easy to see right through all of those different toolings, right? Like, you can tell when you're looking at a GPT-4 generated blog, whatever it is. And I don't know, I feel like the more that people use them, the more obvious it's going to be for those that don't use them to stand out. And I think that's a big thing in marketing: how can you stand out from the rest of the pack so that you can show your product and your service in a really strong and powerful light and resonate with the market? I think people that lean away from all these different tools are going to be able to do that best.


20:38

Speaker 2
But then again, maybe they have a really powerful way to integrate human creativity and not make it corny. I don't know. I just haven't seen one yet, and I've seen so many of these tools come out and yeah, always underwhelmed, I.


20:52

Speaker 1
Think it's largely about how you use it. It's something Ryan and I were just talking about: if you use it to try to do the whole thing, just like if you use it in code to try to generate the whole system you're building, it can do a pretty bad job, and it's going to be hard to figure out what you need to change to make it actually good. Whereas if you use it to take away the rote parts of writing, but not those kernels that are the actually important stuff: you as a human come up with the main idea, the kernel, the hook, the things that get people excited, and then when you need a few sentences of description, bam, bam, tab, that's done. Now on to the next important point. So you still need to craft and own what you're creating.


21:34

Speaker 1
But AI is an accelerant. If you just use it by itself, it probably takes as much time to edit as it does to create, if it's going to be something that's.


21:43

Speaker 3
Really quality. I think it's really fascinating, Paul's point, that the sort of size of the unit seems to be consistent across modes of writing or modes of functionality. He talks about how you can't use AI to write a whole blog post; you can maybe use it to write a few sentences and then tweak those yourself, et cetera. We see the same thing using GitHub Copilot: if you ask an AI to write an entire three pages of code for you, you'll probably spend more time debugging it than it's worth, so you'd probably not ask it to do that. But if you ask it to write individual functions, which are shorter, it can save you a lot of time. So we're honing in on this atomic unit of AI query quality, basically, across functionality. I'm curious to see if that window gets bigger as the AI improves.


22:31

Speaker 3
Will there come a time when it makes sense to write a whole blog post with GPT-6 or whatever it is? I think it's interesting. On the WordPress thing, it's interesting because in terms of writing quality, all these tools are actually bound together, right? They're all dictated by what the best foundation model can produce, which in this case is probably OpenAI's GPT-4. So they can't really compete on quality of writing very effectively with each other, or with OpenAI for that matter. So I think we're starting to see winners emerge based on UI/UX, just how they package it and the overall experience. You can argue that Jasper is the winner because they have a really great UI/UX, where they do a good job canning prompts but also allow for flexibility. And in note-taking, people seem to be adopting Notion AI because they have a great UI/UX, for example.


23:15

Speaker 3
So for WordPress specifically, I would say they're a valuable tool, but not necessarily known for great UI/UX. I could be wrong, but I don't know if they can compete that effectively in that arena.


23:27

Speaker 1
But I guess time will tell. The advantage they have is that they're the system everyone's already writing in. You don't have to go get another tool.


23:34

Speaker 3
Yeah, that's true.


23:37

Speaker 1
They're already there. Sweet. That's it guys.


23:42

Speaker 3
That was the five.


23:43

Speaker 1
We are wrapped.


23:46

Speaker 2
One other question I think would be interesting for you guys to get into. I don't know if you guys have another one planned, Paul, but I think really defining the differences between all these different chatbots would be good. Question one was about how Meta is investing in these chatbots, but there are so many different chatbots, right? There are customer service chatbots, there are unstructured data chatbots, there's a chatbot that looks across our docs, and then there's Zoe, and there are all these different applications. But I don't think the average person really understands how one chatbot differs from another and how the underlying architecture enables that difference. I think that'd be cool for you guys to get into. That could be a great question to go after.


24:32

Speaker 1
Next time we do one, we could do one on AI in high-risk areas. How would you use AI effectively in legal? How would you use AI effectively in data? How do you use these tools when the penalty for getting it wrong is really high? Like that lawyer this week who cited a case that doesn't exist; you're probably losing that one. Yeah, exactly. That happens. That's a really important area where you've got to be able to have some guarantees around how your system is working. And that's why you would use XYZ legal product as opposed to just ChatGPT, right? Because you need some guarantees around accuracy.


25:10

Speaker 3
Spoiler alert: just asking ChatGPT is not enough. Like that teacher did, I think in Texas, where he went and asked about every one of the papers: was this written by ChatGPT? He asked ChatGPT, did this student write this paper with ChatGPT? And the answer was, like, yes for all of them.


25:24

Speaker 1
So this teacher failed, like, his entire class. Unreal.


25:31

Speaker 2
I think kids are going to have a really difficult time writing on their own in 10-15 years. Maybe there's not as much need for that as these toolings get so good that writing with ChatGPT is better than writing on your own. But I don't know. I even see it now: sometimes when I'm writing longer-form content, I have this wedge in my head that I could just be plugging this thing into ChatGPT with some sort of variation to get this end result. And I'm constantly fighting that battle: no, don't go to the dark side, do it on your own.


26:08

Speaker 3
But I don't know, I think that's very true, and I've seen that happen myself in other domains. Thanks to my iPhone, I've become, like, a terrible speller. I'm just awful at it and so lazy. And thanks to Google Maps, it's completely destroyed my sense of direction.


26:27

Speaker 2
I have no internal compass.


26:30

Speaker 3
I still remember getting around myself. I'm old enough to remember a time before Google Maps when I was already driving. Back then I just had this internal map of where I was going, and now it's completely gone.


26:41

Speaker 1
I think the important distinction is, like, skills that matter versus skills that don't, right? At some point you learned long division. You could probably dust off some cobwebs and do some long division right now, but, like everyone else in the world, you whip out your phone and use the calculator. Calculators are great, but if you don't learn the underlying concepts, if you don't learn what multiplication is, then you're going to have a lot of problems. I think it's the same with writing: you've got to understand how to think, how to structure thoughts. If you never learned division and just used calculators, you'd have a problem. If you never learned how to write anything and just used ChatGPT, you'd have a problem.


27:21

Speaker 3
It's funny that you used division as the example there. Paul, were you referring to the Isaac Asimov story with that?


27:26

Speaker 1
Yes. Okay. Yeah.


27:27

Speaker 3
There's a famous short story from Isaac Asimov where they watch a guy do long division by hand and everyone else is like, what magic is this guy doing? It's set in the near future, where society has forgotten long division because it's all done by computers. And the punchline towards the end is that people used to do this by hand before we taught the machines to program machines, which is all just coming full circle with those last points.


27:50

Speaker 1
As well.


27:50

Speaker 3
And he wrote this in, like, the 50s or something like that. It's quite prescient.


27:55

Speaker 1
Those were a great time. Everyone was very optimistic about what the future was going to be and how fast we would get to advanced things. Yeah.


28:05

Speaker 2
I think unlearning the skill of communication, though, is potentially much more harmful than losing any of these other skills. If you talk to any screenwriter, or author for that matter, they always talk about how the only way you can really improve your writing is through reps. You just have to write a few s***** screenplays, and you've got to go through that process multiple times to really get better at it. It's not enough to just understand the underlying rules of writing; you really have to apply it. And as competitive alternatives like ChatGPT make applying it so.


28:47

Speaker 1
Easy, fewer people are going to put.


28:48

Speaker 2
In those reps to really understand, I don't know, the nuances of language and communication stuff.


28:55

Speaker 1
No, I think a good parallel is coding, though. There are parts of writing that are really rote, and there are parts of coding that are really rote; probably more parts of coding than of writing. But if you use it well, you're still creating the thing, and it's just acting as an accelerant. ChatGPT isn't really replacing any ideas I have while coding. It just accelerates: I need to do something, I need to do three more of that thing with slightly different context, and I can now just go tab, tab, and I'm on to the next thing that I'm actually creating. I think it's also just how you use the tools. If you use them really effectively, it's an accelerant to your own creative process, and the output is going to be much better than if you use it as a replacement.


29:34

Speaker 1
Though, if I just said, ChatGPT, you go do a bunch of stuff on our code base, things would be bad.


29:39

Speaker 2
Yeah.


29:44

Speaker 1
Reminds me of the Silicon Valley episode where Gilfoyle tasks the AI with ordering cheeseburgers for the office, and it orders pallets of cheeseburger meat because the reward function is off. And it's like, oh, the reward function was off for that, my mistake, but there are already thousands of dollars worth of hamburger meat sent to the office. Sure, it can't quite do everything yet. Yeah, no, I agree with you guys there. Sweet. This is a wrap. You guys are all done.
