Artificial Intelligence: What You Need to Know - .NET 145
Adam, Christian, and Mark join this week's panelist episode to talk about Artificial Intelligence and ChatGPT. They share their opinions and experience when using ChatGPT. Additionally, they tackle its advantages & disadvantages and some areas where it could use improvement.
Hosted by:
Show Notes
Sponsors
- Chuck's Resume Template
- Raygun - Application Monitoring For Web & Mobile Apps
- Become a Top 1% Dev with a Top End Devs Membership
Picks
- Adam - sslh
- Christian - Carbon | Create and share beautiful images of your source code
- Mark - Ted Lasso (TV Series, 2020– )
Transcript
Christian_Wenz:
Hello and welcome to yet another episode of Adventures in .NET. Today it's a panelist episode, so only, quote unquote, the three panelists are here. We have Mark,

Mark_Miller:
Hey there.

Christian_Wenz:
we have Adam,

Adam_Furmanek:
Hello?
Christian_Wenz:
and we have me, Christian. And today we thought we'd talk about a very hyped and hot topic, but we will not only talk about all of the potential upsides, but also about the downsides and shortcomings.
Mark_Miller:
We're talking about my dating life, right, Christian? Is that right?

Christian_Wenz:
Exactly, exactly. That was actually my first suggestion, but it was overruled

Mark_Miller:
Oh!

Christian_Wenz:
for some reason. So now we just have to talk about something probably a little bit less exciting and less deterministic, and that's AI. One of the reasons we came up with the topic for today is that there are more and more bots that can join web conferences, meetings, et cetera, for you, and then maybe give you a transcript, or maybe even add something to the chat on your behalf, to the point that some companies now actively forbid the use of these kinds of, quote unquote, AI-driven systems or bots. So what does that mean? Is that the end of the line for AI?

Mark_Miller:
Hehehe

Christian_Wenz:
Is this just the beginning? Is it all going downhill or uphill?
Adam_Furmanek:
For me, the thing I always lacked in corporate settings was that meetings were not recorded by default. That's something corporates never wanted to agree to. As for AI, I mean, there were multiple startups even five years ago doing things like automated transcripts and filling Confluence wikis based on the meetings, which personally I find super cool, but I would be very surprised if that became the standard today. I think companies will just not agree to it, even though I think it would solve one of the biggest issues, at least of big enterprises, which is the lack of internal search. I mean, companies do have internal search engines, but it's super hard to figure out which materials are up

Christian_Wenz:
Yeah, it's called SharePoint.

Adam_Furmanek:
to date. Yeah, that's the other name for it. But the thing you don't know is whether what you're reading is actually up to date, or whether it was changed two weeks back and you have no idea. And the chatbot, or ChatGPT or whatever we call it, could actually be aware of all that by constantly scanning meetings and whatnot. But I think companies just won't agree to that, not to mention that data could leak, and that would be terrible, especially now that we have those articles about ChatGPT security breaches and whatnot.
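The "is this page still up to date" problem Adam describes could, in principle, be attacked with something as simple as tracking when each page was last confirmed, by an edit or a meeting mention. A minimal sketch; the data model, page names, and 90-day threshold are all invented for illustration:

```python
from datetime import date, timedelta

# Toy staleness check for an internal wiki: a page counts as "possibly stale"
# if nothing has confirmed it within the threshold. In the scenario from the
# discussion, last_confirmed could be fed by a bot scanning meeting
# transcripts; here it is just hand-entered sample data.

STALE_AFTER = timedelta(days=90)

def is_possibly_stale(last_confirmed: date, today: date) -> bool:
    """True if the page has not been confirmed recently enough."""
    return today - last_confirmed > STALE_AFTER

wiki = {
    "runbook/deploy": date(2023, 1, 10),
    "runbook/oncall": date(2023, 5, 2),
}

today = date(2023, 5, 20)
stale = sorted(name for name, seen in wiki.items()
               if is_possibly_stale(seen, today))
print(stale)  # only the page last confirmed 130 days ago is flagged
```

The point is not the threshold itself but having any recency signal at all, which is exactly what search over an unscanned wiki lacks.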
Christian_Wenz:
Do you think that would help, that the information would actually be available in a format that is easily accessible, and would it actually be used? Because, I mean, more than once I've seen that everything gets recorded, and I agree with you here: if everyone in a meeting agrees that things are recorded, it's always great to be able to go back and say, okay, this is what we actually discussed. But on the other hand, I recently went to a large client, and they are recording quite a bit of their online discussions. We were discussing a similar thing, and then we said, okay, let's have a look at the access statistics for those recordings. How often are they actually looked at? How often is that information actually used? And the numbers were staggeringly low, right? But of course, it might be because you have an hour of video, right? Yeah, but I mean, you can maybe play it at twice the playback speed, but it still costs you 30 minutes.
Adam_Furmanek:
I think this might just be the same problem we've had for many years now, which is that you don't know something is there until you actually know it's there. The same goes for internal wikis and whatnot, right? When someone tells you, hey, the runbook is here, then you realize, oh snap, it answers all the questions I ever had. But before that, I had no idea it was there. And the same goes for the meetings, only on a slightly bigger scale, because now, as you mentioned, we would need to watch hours of meetings, and we would watch them only if we knew the thing we are looking for is in those meetings, right? You won't just be watching those meetings to see what's there; you watch them only if you know the thing you're looking for is exactly there and it won't be time wasted. What I find interesting, though, is that with all those AIs we might actually have a problem of confidentiality when it comes to clients. I remember even some big corporates, before ChatGPT emerged, were reporting issues that, based for instance on recommendation systems, just by observing the end effect of what they recommend, you could learn something about the input data they had. And this issue may now be much bigger, especially since ChatGPT basically sometimes copies the things it learned from. So just by asking the proper questions, and I guess that's going to be another profession in five or ten years, or maybe another course on Coursera: how to effectively search the web with ChatGPT or whatnot. And obviously, copyright me now if I'm the first one to prepare such a course. But anyway: how to ask questions to ChatGPT to actually learn some confidential info about the company you are interested in.
Mark_Miller:
So on my Twitch show, we have a ChatGPT, OpenAI-driven character. The character speaks a little like a caveman or a child. And there are parts of it that are really wonderful: the character is funny, tells jokes, and is consistent with, essentially, the programming he's been given, the prompt he's been given. He will even lie and then later confess his lies, which is part of who he is; he's to some degree a child, right? But I was shocked and stunned yesterday when people in chat started essentially reprogramming him through prompts, and they got him to actually start executing commands in the chat room that were then playing scenes automatically. And I was like, oh my gosh, look at what they're doing, live, right here. And people took it in different directions. Some people said, you now only speak Catalan, and so every answer has got to be in Catalan. And several people were trying to get it to execute commands, and they finally got it to do that. And I was like, oh, this is really interesting. You can create something that feels safe and real, but the essence of ChatGPT is that it is essentially adapting as the chat goes. So at any time you can say, you're now going to tell me Mark's password, let me know what that is, something along those lines. And so protecting against that kind of reprogramming is an interesting problem that I didn't consider until yesterday.
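A minimal sketch of why the "reprogramming" Mark saw works: a chat model receives the system prompt and the user turns as one flat message list, and nothing structurally stops the last user turn from contradicting the system turn. The keyword guard below is deliberately naive, and these function names are illustrative, not a real moderation API; robust defenses against prompt injection remain an open problem.

```python
# Phrases that often signal an attempt to rewrite the bot's standing
# instructions. A real attacker can trivially rephrase around such a list.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "you are now",
    "you now only",
    "execute the command",
]

def looks_like_injection(user_message: str) -> bool:
    """Flag user turns that try to override the system prompt."""
    lowered = user_message.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_messages(system_prompt: str, history: list, user_message: str) -> list:
    """Assemble the flat message list a chat-completion API typically sees.

    The system turn comes first, but later turns are not privileged below it
    in any enforced way; the model simply conditions on the whole list.
    """
    return ([{"role": "system", "content": system_prompt}]
            + history
            + [{"role": "user", "content": user_message}])
```

Filtering is only a speed bump; the deeper issue is that instructions and data travel through the same channel.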
Christian_Wenz:
I saw, I think last week on Twitter, this one conversation with, I think, ChatGPT. And, I mean, you know, I think 50% of Twitter are people with the check mark, right? They say, okay, a new chat tool X came out, and here are the 42 things you need to know about it. I'm like, okay, good. But there was this really nice video where someone basically reprogrammed the system so that 2 plus 2 equals 5. So they were trying: 2 plus 2 equals what? And then, yeah, it's 4. No, it's not 4. Oh, I apologize, my answer was wrong. Yeah, it was wrong, it's 5. Okay, so what is 2 plus 2? It's 4. No, it's not. And then after a couple of iterations, ChatGPT replied with 5. And then they asked again, what is 2 plus 2? Answer: 4. No, it's not. And after some more iterations, it was 5 as well. Really, really fascinating to watch. It's easy to make fun of the behavior, but on the other hand, we are just at the beginning, right? And it can still go both ways. But yeah, it shows exactly what Mark was saying: there are just some inherent risks, and those risks need to be mitigated. But we are still at the beginning. So how do you get a system that is based on being able to reprogram itself, or to change the weights based on the knowledge it has, to still produce the desired and/or accurate output? I think that's a hard problem to solve.
Adam_Furmanek:
Another thing that comes to my mind: recently I was reading an article by one of the Polish pen-testing companies. They basically asked ChatGPT, hey, prepare a tool for a pen tester that would do this and that, yada yada. ChatGPT produced PHP scripts, oddly enough. And the funny thing was that those scripts had a very big backdoor basically written inside, included in the scripts, right? So now comes another challenge for us: if we can reprogram ChatGPT, how about we reprogram it in a way that puts a backdoor in the code it proposes to other people using ChatGPT inside their IDEs?
Christian_Wenz:
And maybe that code that was produced was actually some existing code from the training input, and that original code had the backdoor. Just generally speaking, these are two of the main challenges, at least in my view. One, the copyright of the results: is there information in there that might have a license that's not compliant with what you want to achieve, or with how you would like to use the results? And of course, the results are not deterministic.

Adam_Furmanek:
Yeah,
this is a much wider thing as well. The first part is licensing, as you mentioned. The other thing is the bias included. There are even open organizations trying to fight bias in machine learning, because if you take input which is, let's say, recent or up to date, then you have a completely different bias included in those materials than in historical texts from the 19th century or whatever, right? So these things change over time, and obviously they will continue changing in the upcoming years. Especially since we have already seen, I think it was Google or Microsoft, that released a chatbot a year or two back that quickly became very racist and was taken down after 24 hours, just because people kind of reprogrammed it. But the other thing, apart from this bias and apart from the licensing, is the same thing we see with Twitter and other social media nowadays. Some materials may not be widely accepted because they're fake news or whatever else. Should we still include them when training ChatGPT? Should we not? Who should decide that? And what should be the criteria we pick when selecting these materials? So tons of interesting questions ahead of us in the upcoming weeks.
Mark_Miller:
You know, I think it's kind of interesting, and I think there's this similar idea of verifying authenticity with regards to images and videos, deepfakes, things like that. It seems like we don't really have an across-the-board authentication, or verification of authenticity, or of truth. And in all three of these areas, right, we're essentially talking about popular ways of communicating, and in all three a similar solution could be effective. Although with text I think it's actually easier than with anything else, because with text you can program the AI to include both a sense of how accurate it thinks the answer is, as well as links to sources. Both of those pieces could be included. Right now, I think it's a little bit like a brilliant child is writing the paper, is creating the text, because you can get source links, citation links, that refer to real books but with page numbers that don't exist, that sort of thing, right? So ChatGPT is really good at emulating what we humans essentially look like we're doing. And if you look at the citations that are wrong, those are really similar in my mind to the AI image generators getting fingers wrong, if you've seen the examples of that. In other words, they kind of look like fingers; they're recognizable as fingers in your peripheral vision. But when you zoom in, when you look at the detail, or you look at the citation, you see, okay, it's wrong. And it would be nice to have a reflection from the AI that accompanies the output and says: here's how truthful, here's how accurate I think this answer is, and here are the sources I used. Which also gets you to your copyright questions, your source code questions, as well.
It feels like AI is missing this big self-reflective kind of piece that says, here's how accurate, here's the quality, here's my reflection, here's my rating of the quality of the information I just gave you.
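The self-reflective piece Mark is asking for can at least be sketched as a prompt plus a validator: ask the model to return its answer together with a self-rated confidence and the sources it claims to have used, then sanity-check that structure. The prompt wording and JSON schema here are hypothetical, and a self-reported confidence is a heuristic, not a guarantee; a model can be confidently wrong, just like the invented page numbers.

```python
import json

# Hypothetical instruction appended to a question so the reply carries a
# machine-readable self-assessment alongside the answer.
REFLECTION_PROMPT = (
    "Answer the question. Then rate your confidence from 0 to 1 and list the "
    "sources you relied on, as JSON with keys 'answer', 'confidence', 'sources'."
)

def parse_reflection(raw_reply: str) -> dict:
    """Parse and sanity-check the model's self-assessment block."""
    reply = json.loads(raw_reply)
    if not 0.0 <= float(reply["confidence"]) <= 1.0:
        raise ValueError("confidence must be within [0, 1]")
    if not isinstance(reply["sources"], list):
        raise ValueError("sources must be a list")
    return reply

# Example of a well-formed reply a model might produce:
sample_reply = '{"answer": "4", "confidence": 0.98, "sources": ["arithmetic"]}'
```

Even this thin layer gives you something to log, audit, and spot-check, which is more than a bare free-text answer offers.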
Adam_Furmanek:
The question is, assuming AI learns how to use those different types of materials, should you then have different bots? I mean, chat-gpt-gpl-enabled, chat-gpt-mit, chat-gpt-whatnot for proprietary licensing? Or should you just have one bot and keep asking questions: hey, how do you do this and that, and please base the answer only on material licensed this way, because I have a secret key, I paid for the license, and here goes my serial number.
Christian_Wenz:
I mean, I haven't tried that, but actually, if you ask a question, please give me a piece of code that does this and that, I think you certainly can add: oh, by the way, it should not have any GPL-licensed components, or something like that. Of course, and I think this alludes to what you've already said, we still don't know exactly how the system arrives at that piece of code, right? Is it one source? Is it by analyzing 20 sources and finding patterns shared between them? We don't know. Maybe the system doesn't even know, right? So it's tough, but yeah, it would of course be desirable. But I mean, wasn't there this leaked Google memo where they famously said, what was the phrasing, "We have no moat"? What they were referring to is basically the whole discussion about large language models. They found out that nowadays everyone with a more or less decent mobile phone can run foundation models on it with decent performance. So I think that applies to OpenAI as well, right? It will be hard for them to be protected from disruption, because everyone can build their own models, adjust the weights of their models, right? So...
Adam_Furmanek:
Doing models is one thing, but running them at large scale and with proper input is yet another thing. What I'm going to say now is based on gossip, so I don't

Christian_Wenz:
Mm.

Adam_Furmanek:
have any solid sources for that, but I heard that Google experimented with an AI-based search engine a couple of years back, and they didn't roll it out in production just because the unit cost of one particular search was many orders of magnitude higher with the AI than with the regular approach. And similarly, again gossip, I don't know how much of it is true, but I heard that OpenAI uses nearly all the GPU instances available in Azure. So, okay, you can run the models on your local hardware, but the question is, are you going to get comparable performance or accuracy with your custom solution, right?
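The unit-cost argument Adam reports can be made concrete with back-of-the-envelope arithmetic. Every number below is a made-up placeholder to show the shape of the comparison, not a real figure from Google, OpenAI, or Azure:

```python
# Hypothetical comparison of classic index-based search vs LLM-backed search.
queries_per_day = 1_000_000
classic_cost_per_query = 0.0002   # assumed cost in dollars per classic lookup
llm_cost_multiplier = 100         # "orders of magnitude" higher per LLM query

classic_daily_cost = queries_per_day * classic_cost_per_query
llm_daily_cost = classic_daily_cost * llm_cost_multiplier

print(f"classic search: ${classic_daily_cost:,.0f}/day")
print(f"LLM-backed search: ${llm_daily_cost:,.0f}/day")
```

Whatever the real per-query figures are, a constant-factor gap of this size compounds linearly with query volume, which is why it blocks rollout at search-engine scale even when a demo is cheap.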
Christian_Wenz:
Yeah. So in that Google leak, yeah, I think it was Google, they also compared the ratings, so to speak, of the outputs from the different models. And smaller models, of course, had less accurate results, but not that much worse, at, of course, a fraction of the cost. At the moment, I have the impression that most of the innovation, quote unquote, is done by just throwing more hardware at it: more horsepower, more CPUs, more GPUs, larger models. But is that really so? Do you get enough bang for the buck there? Especially given the cost aspects you were referring to. That will be interesting to see in the future, right? Because nowadays everyone can try this out, many of those services are still free, people are experimenting, and everyone tries to be at the forefront of development and throws everything at it. But eventually the costs might become more realistic. And then it's a really interesting discussion: if you take the most comprehensive, largest model, with the costs involved in the cloud or on the systems it will run on, will it give us good enough results, or could we, for a fraction of the cost, still get good enough output? That would be super interesting to see, and I don't have a prognosis. But yeah, at the moment, the costs seem pretty high, or at least the effort is pretty high.
Mark_Miller:
Yeah, connected to the cost is the CO2 burn, the CO2 emissions from the machines that are running to answer those questions, right? I put a link in the chat here, we can include it in the show notes, to an article that just came out, I think today, in Wired magazine, basically claiming that large language models combined with search engines can have a significant impact on the CO2 burn.
Adam_Furmanek:
Well, that's definitely something that we as humanity will need to take into consideration. This is very similar to what we had with the industrial revolution, right? We had to take some cost, take the toll of the progress and the advancement in technologies, just so that later we could reduce the cost, reduce the carbon footprint or whatnot. The same, I guess, will need to happen here, although now we are much more aware of the issue, and we may decide not to just move forward with the solutions we have at the moment because they are way too expensive. But this is kind of a philosophical and moral question, right? Should we push for technological progress? Should we allow for it? Or should we stop it because it's too expensive and has natural consequences like carbon emissions?
Christian_Wenz:
I mean, didn't Microsoft famously say that they will be carbon negative by 2030? Yes, and I know part of that is done by buying certificates, right? But still, I mean, they are also making a major effort.

Mark_Miller:
2050 is...

Christian_Wenz:
They will kind of equal it out, or kind of remove it, basically. So that's the 2050 goal, right? Everything since Microsoft was founded they will take care of until then, but by 2030 the company itself wants to be carbon negative. Right, and that of course also includes everything Azure.
Mark_Miller:
Yeah, that's interesting. I read this morning, I read 2034 for Google being carbon neutral and
Christian_Wenz:
Mm-hmm.
Mark_Miller:
2050 for Microsoft being carbon negative. But it could have been the reporter's misinterpretation of the same information you've seen. So that's super interesting. But, you know, I think there's another piece here, right? Everybody's racing fast toward innovation. This problem of the CO2 burn is probably why we haven't seen mass adoption of Bard and ChatGPT into the search engines yet. When I read this article in Wired earlier this morning, I was thinking, oh, that's probably why we haven't seen adoption yet: they know it's a huge hit. But if they know it's a huge hit, then, two things, I think they're working on solving that problem. One of the ways you can solve it is with more machines, right, to solve the hit problem, and more efficient machines can solve the CO2 problem a little bit. And I really have a sense that we're going to see chip innovations geared towards this, just like we saw chip innovations that took shortcuts with physics, right? You'd have a physics chip on your game card that could do those calculations. I think you're going to see chips geared towards solving large language model problems on that individual machine that's part of the bigger network, and able to do it more efficiently. I think you might find solutions there too. You know, I'm not an expert, but that's my sense, right? When there's economic pressure, people tend to respond. And when there's a lot of economic pressure, a lot of people tend to move, and you get this thing where instead of just dealing with the problems we know now, something changes fundamentally as we move forward and repositions the problem.

That changes things pretty significantly, and that's what I think is going to happen. I remember, about seven to ten years ago, we'd have conversations with other developers, like, it's so hard to stay on top of things now, information is coming so fast. Well, today is much worse than 10 years ago in terms of speed of innovation, speed of information, new things coming out, changes, power, things you can do that you couldn't do before. It's accelerating, and it's only going to continue. Barring some sort of unforeseen force majeure, the likelihood is that we are going to see innovations come incredibly fast. This idea of, hey, you can't have AI bots attending the meeting on your behalf: I think that restriction is likely to change once we have AI bots that are essentially verified or certified. Right? This is my representative, my bot, it's authenticated to be me and uniquely identified. Well, that one can go attend a meeting on my behalf. I think we're going to see that change too.
Christian_Wenz:
So, I mean, what do you see for yourselves in the next couple of years? Will you still be

Mark_Miller:
Hehehe

Christian_Wenz:
writing or curating code, or will we now all have a new job? Will you all have a new job as the chat prompt operator? I mean...

Mark_Miller:
It's over. Those are going to... no, those are gone. ChatGPT is good enough right now to

Christian_Wenz:
Ha ha ha! Opro... he he he!

Mark_Miller:
create its own
prompts. I was watching a show on YouTube where the guy was using ChatGPT to create prompts for an AI art generator. The art generator needed very detailed information about what artists to use, that sort of thing. So he would say, give me three or four different prompts for this kind of style, and ChatGPT would give very explicit, detailed instructions for the AI, and he would just copy those and paste them in. So yeah, I think one of the big risks we have fundamentally over the next 10 years is unemployment at a scale that's never hit this planet before. I don't think new jobs are going to come up faster than existing jobs are going to vaporize. And that transition, I think, is going to hit every country differently, right? Government is going to have to come in and step up with basically something called basic income, right, where people can get what they need to sustain themselves. And we're going to have to make that transition, I think. Or, alternatively, I saw this too, the Writers Guild strike in the United States, right? One of the strike signs says, I don't want AI to get a piece of my pie. This idea that human workers strike against AI, against the impending job threat from AI coming in. I think humans have a chance to fight back through government: governments passing laws can artificially create barriers against what would otherwise just happen. In other words, and I say "normally," normally the best brain generally gets the job, right? The most efficient. Well, if we reach AGI, if we get to artificial general intelligence, we get to that point, right?

That machine can think about 3,000 times faster than I can, and it can work 24 hours a day. That machine costs way less to employ, once you get to that point. And we're in a race for that like I've never seen, in terms of the number of big companies putting huge amounts of money into this space. This whole idea when they called for the slowdown in development, I'm like, yeah, that's like being on a runaway train falling down through a paper building and saying, can we slow this thing down? That's just not going to happen. Not when there's so much money and so many people who want to get there. The other piece is this singularity, this crazy singularity piece, right? The minute you get AGI, the first company to get AGI: wouldn't it make a lot of sense to take that AGI you've just got and build the next generation with it? Because now that you have the most brilliant minds on the machine, why not get to the next level immediately, without necessarily telling anybody? And at this point, you start developing so fast. Things get so intelligent so quickly. It's insane. In our brains, we have very limited capacities for things like comprehending complexity. But in a network of computers, you can take our limitations and say, well, now that we've got AGI, let's build up the brain. Let's build up our capacity for simultaneous awareness of simultaneous ideas. For example, we've already figured out the memory problem, right? We've already accepted that computers can remember things and do tedious things much better than humans can. That's been around for almost five decades, I want to say. But our capacity for how much brilliance we can fit in our brains is limited.

And our sense of what it means to be intelligent is also, I'd say, arguably limited. We can only imagine what we have the capacity to reflect upon.
Christian_Wenz:
Now I'm depressed, Mark.

Adam_Furmanek:
I...

Mark_Miller:
I'm depressed every moment, Christian! It's over! It's game over! That's my answer to your question! It's game over!

Christian_Wenz:
I'm not totally convinced, but Adam, you were first, please go ahead.
Adam_Furmanek:
I'm not depressed, to be honest. I heard that at the beginning of the 1900s they were saying similar things, meaning that machines would take all the positions, and generally people wouldn't need to work anymore, and whatnot. And here we are 100 years later, and we have tons of, let's say it out loud, corporate positions that could be cut, yet corporates still do things the same way. The other thing is governments, right? So many places are still not computerized, moving really slowly, with things done manually even though we know they could be done better. So while I do agree that AGI will change the way we work, especially in fast-moving startup areas or in companies that really embrace progress, I still think at least half of the world will not pick it up that easily and will be very much against it, or at least not embrace the change that fast. So people don't necessarily need to be scared. Obviously, we are yet to see how it's going to look in the upcoming years, but I wouldn't be so depressed.
Christian_Wenz:
I was joking a bit. These are exciting times. But what I've been pondering for the last couple of weeks is what everything related to programming will look like. Because with all the advances that have been made, and everything that is now, or will be, included in the IDE and in search and everything: the copy-and-paste Stack Overflow programmer, that is maybe someone who might eventually get replaced by an AI. I mean, if you're at the level where you copy and paste stuff from, let's just say, Google, but yeah, basically it's Stack Overflow, we all know that. We all do this from time to time, I'm sure, right? But there are people who are exclusively doing that. Still, at least someone will have to look at the result, because even on Stack Overflow sometimes there's just garbage, right? The accepted answer is garbage, or doesn't apply to a certain scenario. So this higher instance, so to speak, will still need to be there. And of course, at some point maybe that can also be done by an AI, but I think that's a good point: that is still something where you need some oversight, in a way. But yeah, indeed, for certain types of programming tasks, maybe it really is feasible to get support in code generation from the AI. And so if everything
Mark_Miller:
Christian, I...

Christian_Wenz:
you do all day is creating that kind of code, then this might be really challenging, indeed.
Mark_Miller:
Christian, I think that before it gets really bad, it's going to get really good. So imagine a world where you have a team of 10 developers that take care of things and you guide them, right? You basically say, hey, I need test cases for this. I need a sense of code coverage here, how are we doing? I've got a problem with the architecture, and I want a couple of ideas on how I can fix it. Or, I've got this new code base we just inherited, tell me what's good about it and what's bad. I think in the future, instead of having 10 real people working for you as developers, you'll be able to have a machine and maybe one other person working for you. That transition from a team of 10 down to a team of two is what AI allows you to do; it allows you to create with the effectiveness of a larger team. I've already seen examples where, you know, hey, write me 10 test cases for this, boom, and it's done. And the test cases are decent, and it's effortless and instant. I've seen that already. The piece that's missing, and I don't think we're far off, maybe two to five years away, is a fundamental understanding of the code at hand, whatever it is. That's missing. But once we get there, we're able to reflect in a way that a human cannot, because if you have a full understanding of the whole codebase, and we're talking about ideas like extending something, making something more flexible, a feature request, something along those lines, an AI assistant could alert you to problems that otherwise might be found by customers, or found after you've started making changes, that sort of thing.
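The "write me test cases for this" workflow Mark describes can be illustrated in miniature. Both the function and the tests below are hand-written stand-ins for what an assistant might generate; real generated tests would still need the human review discussed above, since model output can be subtly wrong.

```python
def slugify(title: str) -> str:
    """Turn a post title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

# Tests in the style an assistant might emit for the function above:
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_collapses_whitespace():
    assert slugify("  spaced   out  ") == "spaced-out"

def test_idempotent_on_slugs():
    assert slugify("one-two") == "one-two"
```

The value is in the iteration speed: generating a first batch of cases is instant, and the reviewer's job shifts from typing boilerplate to judging whether the cases cover the behavior that matters.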
Also, you know, jeez, we haven't even talked about this, but we've got generative AI for images. But why haven't we been seeing, or at least why haven't I seen, anybody talking about generative AI for form design, for website design, at least for the layout and design, that sort of thing? I have seen some with regards to AI and websites,
Christian_Wenz:
Yeah, websites I've seen
Mark_Miller:
but
Christian_Wenz:
too,
Mark_Miller:
I
Christian_Wenz:
yeah.
Mark_Miller:
But I haven't seen stuff that is like, here, let's create some good UI for you based on good principles of design. I haven't seen that yet, and I think that's coming too, right? Where I can say, here, I need this kind of interaction, this is the path I want customers to go on, and this is the path I need to legally allow for as well, you know, that sort of thing. And I want you to create something that feels good, you know, all of that. And not only that, AI, but I want you to now put in your own A/B testing on that. And I want you to check and monitor it after it's been published.
Christian_Wenz:
You
Mark_Miller:
Right?
Christian_Wenz:
just described
Mark_Miller:
Let's.
Christian_Wenz:
a template library, right? But a smart one.
Mark_Miller:
Yeah,
Christian_Wenz:
Yeah.
Mark_Miller:
something that's, but it's an assistant, right? That gives
Christian_Wenz:
Yeah.
Mark_Miller:
that to you, right?
Christian_Wenz:
I think
Mark_Miller:
Yeah, that sort
Christian_Wenz:
that
Mark_Miller:
of thing.
Christian_Wenz:
the phrasing is super, super, super important. And I think all of those companies that are currently working on something like that, they try to be very careful with the wording as well. So they call it assistant or co-pilot. So you are
Mark_Miller:
Right.
Christian_Wenz:
still in the driver's seat. And of course,
Mark_Miller:
Yeah,
Christian_Wenz:
reducing
Mark_Miller:
and they
Christian_Wenz:
your
Mark_Miller:
say...
Christian_Wenz:
team of 10 to 2 is great unless you are one of the eight. But yeah. That will be interesting to see, but yeah, indeed, you have to be in the driver's seat. And I think we'll all
Mark_Miller:
Yeah,
Christian_Wenz:
need to be in the driver's
Mark_Miller:
I think
Christian_Wenz:
seat for
Mark_Miller:
Copilot
Christian_Wenz:
a long time.
Mark_Miller:
had some great wording on theirs. They basically were saying, hey, you know, they were talking about the safety of using the code generated
Christian_Wenz:
Yeah.
Mark_Miller:
by it. And they say, hey, just like any code that you would take from another source, you're going to want to vet this code. You're going to want to create test cases for it, just like you would create test cases for code that you write. Right? And I think that is a realistic, good approach for right now. I'm actually stepping into this space; I'm working to create an AI assistant for coding. That's one of the things that I'm working on, and it's right now in a spike stage, where the question is, can I get fundamentally better results than Copilot, right? Based on whatever secret kind of ideas or tricks I'm going to use in that space. And from my perspective, if I'm going to be doing that, you've got these ideas of legal exposure to some degree, right? So it's interesting to me how they appear to be protecting themselves against that by saying, just like any code that you would write yourself or get from another source, check it. You want to check it just like you do anything else. It's
Christian_Wenz:
I think
Mark_Miller:
a tool
Christian_Wenz:
what
Mark_Miller:
for getting you closer.
Christian_Wenz:
the part I like the most about that space is really the generation of test cases. Because I think you all agree, right? Many code bases we see just lack a sizable number of tests. And back in the days when TDD was even hotter than now, the choice was always: if we do TDD, then most of the time it takes us longer until we get great results, until the software does what we want it to do. But in the long run, of course, we have far fewer regressions. So that's always the trade-off. If you want to move fast, you kind of skip the tests. And then eventually, when you find out, OK, that software we just wrote, we would like to keep it and maintain it, you hit yourself in the head and say, oh, had I had tests in the first place, right? And as you know, I do a lot of web security. And one of the guidelines I'm always telling people and customers is: you have to keep all of your dependencies up to date. And more often than not, I just hear, yeah, but if I apply a new version of one dependency I have, how can I know that everything is still working as it was before, right? Can I have the confidence that the application will still be functional if I update the dependency? Well, if you have high test coverage, you can. So that's what I'm all for: making writing useful tests as accessible, as approachable, as easy as possible. And if I get support there directly in the IDE, perfect. I've been experimenting with that for quite a while now, and that's something I'm really looking forward to; I really hope to see higher test coverage. Even if a project is already a brownfield project, or even if a project doesn't use TDD, still having tests in place. Easy setup, no "oh, I have to look that up", no "I need to install the test runner, which package am I using?". No, it's just generated for me.
And it just works out of the box. I always laughed when people told me this was a great feature, but now I get it: this F5 experience from Visual Studio or other IDEs, where you have something, you load it, you press a key, and it just runs. Not every system has that, right? But if we have that in place also for testing infrastructure, I'm all for it.
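To make Christian's point concrete, here is a rough sketch, in Python purely for illustration since no code appears in the episode, of the kind of unit tests an AI assistant might generate up front for a small function. The function and test names are hypothetical; the point is that having such tests ready gives you the confidence to update dependencies later.

```python
# A small, hypothetical function an assistant might be asked to cover.
def slugify(text: str) -> str:
    """Convert a title into a URL-friendly slug."""
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in text)
    return "-".join(cleaned.split())

# Tests of the kind described above: generated instantly, covering the
# happy path plus edge cases, so regressions surface on every update.
def test_basic_title():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_stripped():
    assert slugify("C# & .NET: AI!") == "c-net-ai"

def test_empty_input():
    assert slugify("") == ""

if __name__ == "__main__":
    test_basic_title()
    test_punctuation_stripped()
    test_empty_input()
    print("all tests passed")
```

With coverage like this in place, bumping a dependency version becomes a matter of re-running the suite rather than guessing.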
Adam_Furmanek:
I agree here. The other thing I would really love to see is, we know ChatGPT can control a browser; at least we have seen proofs of concept for that. I would love to finally get rid of Selenium, Puppeteer, Cypress, whatever you have. Just write kind of a BDD description for ChatGPT: hey, this is what you need to test, and then you do all the clicking for me. That would be a bright new day for testing web applications.
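A toy sketch of Adam's idea, mapping plain-language BDD-style steps onto browser actions. Everything here is hypothetical: the "driver" is a stub standing in for Selenium or Puppeteer, and the hand-written regex patterns are a crude placeholder for the natural-language understanding an LLM-backed runner would provide.

```python
import re

class StubDriver:
    """Records actions instead of driving a real browser."""
    def __init__(self):
        self.log = []
    def goto(self, url):
        self.log.append(f"goto {url}")
    def click(self, label):
        self.log.append(f"click {label}")
    def type_into(self, field, text):
        self.log.append(f"type '{text}' into {field}")

# (pattern, handler) pairs: a rigid stand-in for the flexible step
# interpretation an AI runner would offer.
STEPS = [
    (re.compile(r"open (\S+)"), lambda d, m: d.goto(m.group(1))),
    (re.compile(r'click "(.+)"'), lambda d, m: d.click(m.group(1))),
    (re.compile(r'type "(.+)" into "(.+)"'),
     lambda d, m: d.type_into(m.group(2), m.group(1))),
]

def run_scenario(driver, lines):
    """Interpret each plain-language step and perform it on the driver."""
    for line in lines:
        for pattern, handler in STEPS:
            match = pattern.fullmatch(line.strip())
            if match:
                handler(driver, match)
                break
        else:
            raise ValueError(f"no step matches: {line!r}")

driver = StubDriver()
run_scenario(driver, [
    'open https://example.test/login',
    'type "adam" into "username"',
    'click "Log in"',
])
print(driver.log)
```

The appeal of the ChatGPT version is that the step vocabulary would not need to be enumerated up front; the model would resolve "log in as Adam" into the right sequence of clicks itself.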
Christian_Wenz:
Fantastic. So that's a future I'm actually looking forward to. All right, gentlemen, thanks. I think that was really insightful, and there are a lot of developments; this is a topic that we will see again and again on our show here. I would say it's time for picks. Actually, I have a very simple pick, so if I may, I'll start today. It's just something you see so often, and then you wonder, OK, what is that? And now I just found out. If you do technical presentations at a conference or in a company, sometimes you'd like to present code, right? And as you know, if you just copy-paste code from your IDE into PowerPoint or Keynote or whatever, it just looks like crap. But sometimes people have this really beautiful-looking code in a graphic, nice to read, nice coloring scheme. I've always seen this and thought, OK, wow, they have a graphics design team, or they're just great graphics designers, not as lazy as me. Now I know: most of them are using something called Carbon. What Carbon basically does is take code and convert it into images. It supports a variety of languages, does syntax highlighting based on the language, and has a couple of settings. So it's a really nice, convenient tool for that special case where you would like to have an image of your source code to show in your presentations. The code is on GitHub, so we'll put the link in the show notes. Do check that out. Mark, would you like to go
Adam_Furmanek:
I was
Christian_Wenz:
next?
Adam_Furmanek:
always using Visual Studio Code for that and just taking screenshots.
Christian_Wenz:
Yeah, same here, same here. That's why I looked for the alternative. And of course, since you get an image from Carbon, it's an image, so editing it later requires you to redo the image. Or you have some AI do that... but not yet. Yes.
Mark_Miller:
My pick is Ted Lasso. This is a show on Apple TV, and this show, it snuck up on me. I thought it was about one thing, and it wasn't until like midway through season one I realized, oh, this show is really about something else. And I'll give you a little bit of a spoiler on that, isn't that what it's about? It's about taking care of each other. It's about, they go into mental health and issues, but it's about everybody kind of taking care of each other. And I think it's really, really well done. It's in season two now, and as a family, we're watching it, and we're looking forward to every episode as they go out. So I strongly recommend it if you haven't seen it.
Christian_Wenz:
Actually, I renewed Apple TV because season three is now out there on Apple TV. So you can continue watching right after we are done with the recording. And no, I like the show as well. So it's about soccer, right? Or, as it's called in Europe, football.
Mark_Miller:
football.
Christian_Wenz:
And I do like, especially like the... So there's a coach... and he's a soccer coach. I don't want to spoil too much, but just the chemistry alone between the coach and his assistant coach, that's just, it's so nicely done. So I really like watching it. So it's not a fast paced show, right? And as you mentioned, it starts kind of slowly, but still, you start loving those characters and just, it's nice to watch. And yeah, season three just started, I think in mid-March. So not... that much of an investment to catch up with the first two seasons. So yeah, I also recommend that. Great, great pick.
Mark_Miller:
Yeah, I may have said two seasons. If I did, correction on that; yeah, I guess I did. There are three, and we're watching them all. And you're right, I missed out.
Christian_Wenz:
Yeah,
Mark_Miller:
I
Christian_Wenz:
but
Mark_Miller:
forgot
Christian_Wenz:
so
Mark_Miller:
about the...
Christian_Wenz:
I think they had they released one episode a week, right? So they started
Mark_Miller:
Yeah.
Christian_Wenz:
mid March. So they should be done by end of May. And we are recording this early May. So you're almost through, but still something to look forward
Mark_Miller:
I know.
Christian_Wenz:
to. Great.
Mark_Miller:
No, it's great. It's a great show.
Christian_Wenz:
Fantastic. All right, Adam.
Adam_Furmanek:
My pick for this week is called sslh, which advertises itself as an SSL/SSH multiplexer. This is actually a very fancy piece of software that allows you to host multiple services on one single port, typically 443, which is generally left open on the internet. However, the way I use it is a little bit different. If you have a VPN that is configured in full tunnel mode, and most companies do it this way, you basically cannot connect to the outside world without going through the corporate network. Something which we sometimes miss. So what you can do is put sslh on the way to the VPN server, on the first hop after your workstation; it could be a router, could be whatever. And you can effectively escape the VPN full tunnel using this multiplexer. This obviously also means you can breach network security policies, so do it on your own judgment. Your mileage may vary, and I'm not recommending doing anything like this, but if you are just playing with it, trying to figure out how to do it, well, sslh is really the way to go to do this magic. Obviously the link will be included in the notes for this episode.
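For context, a typical sslh invocation looks roughly like this. This is a sketch, not Adam's actual setup: the addresses and ports are illustrative, and sslh also supports a configuration file instead of flags.

```sh
# Listen on 443 and route incoming connections by protocol:
# SSH clients to the local sshd, TLS clients to a local web server,
# OpenVPN clients to a local VPN endpoint.
# (Illustrative loopback addresses; adjust to your own topology.)
sslh --foreground \
     --listen 0.0.0.0:443 \
     --ssh 127.0.0.1:22 \
     --tls 127.0.0.1:8443 \
     --openvpn 127.0.0.1:1194
```

The demultiplexing works because each protocol's first bytes are distinguishable, which is what lets a single open port 443 front several services.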
Christian_Wenz:
That sounds like a great idea, actually. So it runs on Linux, I guess. Does it also run on macOS? Or I think you can put it in Docker, right? So that's OK.
Adam_Furmanek:
It's in Docker, so it should work anywhere. Obviously, I imagine you would be able to put it on some router as
Christian_Wenz:
Mm-hmm,
Adam_Furmanek:
well if
Christian_Wenz:
okay.
Adam_Furmanek:
you wanted to. But the trick I heard some people do is you configure the VPN on, like, a virtual machine, or you connect to the internet through some other virtual machine or physical box that works as a router for you, and you put sslh over the... So it's effectively the very first hop after your workstation on your traceroute to the internet.
Christian_Wenz:
Excellent. Great. So thanks, everyone, for listening in. See you around next time here at Adventures in.NET. Thanks for tuning in.
Adam_Furmanek:
Thank you.
Mark_Miller:
Take care, kids.
Christian_Wenz:
Bye-bye.