Stories From The Trenches - JSJ 556

Show Notes

In this episode the panelists share war stories from their career and the lessons they have learned from them. They discuss things they have done back in their early days in tech, and how they now behave differently given those experiences.

Transcript

Steve:
Hello everybody and welcome to another episode of JavaScript Jabber. I am Steve Edwards, the host with the face for radio and the voice for being a mime, but I'm still your host again.
 
Dan_Shappir:
Thanks for watching!
 
Steve:
With me today, I have two people on our panel. Uh, let's first welcome Dan Shappir coming all the way from Tel Aviv.
 
Dan_Shappir:
Hey hey, nice to be here.
 
Steve:
And of course, in the purple room, AJ O'Neil. How you doing, AJ?
 
Aj:
Yo, yo, yo coming at you live from lower back pain.
 
Steve:
lower back pain. That sounds like a place I don't wanna be, although I'm there frequently, it seems.
 
Dan_Shappir:
Ha! Well...
 
Steve:
So today we are panelists only, no guest. Our guest today had to reschedule. So we have decided that we are going to talk about war stories, not real war, just stories, things that have come up over the years as we've worked in technology, and learned things, and what to do and what not to do. So we each have at least one story that we're gonna share that hopefully will enlighten you on what to do or not to do, depending on the situation. So we'll start out with Dan. What do you got for us, Dan?
 
Dan_Shappir:
Okay, so it's an interesting story from a while back. I'm trying to remember exactly when it happened. I'm thinking something like 15 years ago. It was at a past employer. And the lesson that it taught me was to never assume when you're setting out to improve something, especially performance. You're supposed to always measure and make sure that where you're optimizing or improving is actually the real bottleneck, and not something that you just assume is the actual bottleneck. So it was an interesting project. It was actually nothing having directly to do with the web. Well, actually it kind of was. The company made it possible for legacy systems, think even mainframes, to be accessed from modern systems. For example, let's say you want to build a web interface in front of a mainframe. A lot of these legacy systems don't have built-in APIs that you can use, like web services or stuff like that. Instead, more or less the only interface that they have is like a console. So it might be something that emulates some sort of a terminal. Think like one of those IBM terminals. A lot of banks and insurance companies often still have these types of legacy systems, and it's really difficult to replace them. Until you're able to do that, what you do instead is you kind of put a facade in front of them that enables you to automate the access to them. So I worked with a company that created such systems, at least back at that time, and it would be kind of this sort of a middleware server that had a web service interface on one end, using some sort of a legacy terminal protocol on the other end. So you could make a web service request, let's say, to get information from that legacy system. Effectively, you'd be getting the information as if you were a terminal, reading the content out of the text returned, and sending this information down as a response to that web service. Or alternatively, you could even enter information into the system. So you do a form post and actually fill in fields that you send through that terminal interface into that legacy system. So that's how you could both read and write information out of those types of systems. And like I said, then you could, let's say, build a web interface to replace that legacy terminal. Is that kind of clear so far?
 
Aj:
So we're talking about baud here. How many 7-bit bytes you can send per second, right?
 
Dan_Shappir:
Well, you know, it's not really such an issue because at the end of the day, your middleware is probably residing on the back end next to that legacy system. So bandwidth is not an issue there. It's more... Think about it this way. Let's say you want to enable people to see their bank balance, and that bank uses a mainframe to host all the customer accounts, and the only way to easily access that information is through terminal emulation. So it's kind of an automated facade in front of a terminal emulator.
 
Aj:
So what I'm questioning is, are we talking serial console terminal? Like, yeah. Okay. So like the kind of thing where you have to specify what baud rate you want to use for the serial interface to be able to interact with the terminal?
 
Dan_Shappir:
I think that's a good point. Yes.
 
Aj:
Okay.
 
Dan_Shappir:
Well, it might be even other protocols like Token Ring or whatever, but let's not even go there. These are legacy systems. And the company that I worked with had originally implemented such a middleware server, to enable creating web services on top of those legacy systems, using C++ on Windows. They used the direct Windows APIs, like the Socket API and stuff like that. And then we said, yeah, but we would like to be able to host that middleware on various other server platforms, like, let's say, a Linux server instead. So instead of rewriting it in C++ using different APIs, the idea was, let's just do it in Java. I mean, Java is great on the backend, in a middleware type server. And Java has all the socket and whatnot and thread pooling and whatever, so you could build such a middleware server in a fairly straightforward sort of a way. And so a project was undertaken to implement it using Java, and it worked, but it was much, much slower than the C++ version of that middleware server. Like 10 times slower. The programmers that worked on it kind of said, well, you know, it's Java, it's the JVM, it's garbage collected versus C++, which is native. That's probably the reason. And I didn't accept that answer, because it seemed to me that at the end of the day, this is just, you know, translation between formats, and there shouldn't be a reason that Java would be that much slower than a C++ version. I wasn't familiar with the project, but I had a basic understanding of how it worked. So basically, I just downloaded all the source code down to my computer, opened it in whatever development environment I was using back then, I don't even remember, started to go over the code, and looked for bad code, code that I thought was inefficient. And I found plenty of code that was inefficient. So I did a lot of various improvements in the code, and it seemed to me that I must have solved the problem. So I pushed my changes, built the system, tested it out, and guess what? I had made an improvement of about 1%. Yeah. It turned out that I did in fact optimize a lot of things, but I didn't optimize things that were on the critical path. I didn't optimize the actual real bottlenecks. If you take something that at the end of the day only consumes, let's say, 5% of the total runtime, and improve that tenfold, you've only improved the total runtime by some 4 or 5 percent. That's kind of what I had achieved. I had in fact made some significant improvements to the code, but I didn't improve the code that was the actual bottleneck of the system. And the reason was that, like I said, I went based on intuition rather than based on actually measuring. So then I decided to use a profiler. For those of you who don't know, a profiler is a tool where you can actually run scenarios and then measure where the application actually spends its time. There are various ways in which such profilers can work. They can either instrument the actual code, so that every time you enter or exit a function in the code, it actually creates a timestamp. That way you know exactly which functions are called, in what order, and how long each function actually takes to run. That's one way. Another way is to do a sort of polling, where, let's say every millisecond, it just goes and checks the call stack and sees where in the application the execution is actually taking place. Either way, what you're actually doing is measuring which parts of the application actually run, when, and how long each and every one takes.
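 
A minimal JavaScript sketch of the instrumenting style of profiler Dan describes, where every function entry and exit records a timestamp. The profile() helper and the wrapped function here are hypothetical, not code from the story:

```js
// Record enter/exit timestamps per function, the way an instrumenting
// profiler would. Runs in modern browsers and Node.
const timings = new Map();

function profile(name, fn) {
  return function (...args) {
    const start = performance.now(); // timestamp on entry
    try {
      return fn.apply(this, args);
    } finally {
      const elapsed = performance.now() - start; // timestamp on exit
      const t = timings.get(name) ?? { calls: 0, totalMs: 0 };
      t.calls += 1;
      t.totalMs += elapsed;
      timings.set(name, t);
    }
  };
}

// Usage: wrap the functions you suspect, run a realistic scenario,
// then inspect where the time actually went.
const parse = profile('parse', (s) => JSON.parse(s));
for (let i = 0; i < 1000; i++) parse('{"account":42}');
console.table([...timings].map(([name, t]) => ({ name, ...t })));
```
 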
So you can literally drill down and see how long every function in the code that actually gets executed takes to run. And by the way, this type of profiler exists for JavaScript, for web development. Chrome DevTools has the Performance tab, which has exactly that type of functionality. And consequently, I use it today quite often when I'm trying to optimize, or even just understand, how a particular web application actually behaves. It's a really great tool when you're kind of coming to grips with how an application that you're not familiar with, let's say, operates in real world scenarios. So anyway, I profiled it, and it turns out that indeed, the bottleneck was totally not where I expected. I thought that the bottleneck must be in the code that does the transformations on the data, the code that takes the data that's received through the web service and transforms it so as to send it to the legacy system, and that takes the responses from the legacy system and transforms them in order to send them back to the web service, because ultimately that was the core functionality in that middleware. But it turns out that the bottleneck was somewhere totally different. It turns out that it was the API that was used on the web services side. Because we were talking 15 years back, it wasn't JSON, it was XML. And the code that parsed the XML was very inefficient. It turns out that the way that the XML was parsed is that instead of using some sort of stream parsing, it constructed the entire DOM object for the XML. And, you know how it is that sometimes when you implement some code and you're trying to do this kind of a separation of concerns or encapsulation, and you do it at the wrong level? That's what they did when they implemented it. They would pass that XML down into a component, it would parse that XML to get the data that it needed, and then would throw that parsed DOM away because it had its data. And then that XML would be passed to some other component in the system, which would do it again, and then to another subsystem, which would do it again. So it got parsed multiple times, each time just to extract a particular value.
 
Aj:
How did you find code that I was written before I was born? That, I don't know what I said. Yeah, how did you find code that I had written before I was even born? I've done that.
 
Dan_Shappir:
Now that you've written, you mean...
 
Aj:
Oh, I. I didn't, yeah, okay. No, I'm not, Dan, I'm not. Thanks for bringing that up.
 
Dan_Shappir:
Well, first of all, I assume it's not before you were born, because like I said, it
was about 15 or something years ago, and I assume you're older than 15. Ha ha! But anyway, by the way, that kind of a pattern is something that I've encountered, unfortunately, many times throughout my career. Not to such an extreme degree, but that concept of, due to encapsulation, performing that same operation on data multiple times in different subsystems or components, is something that I've definitely encountered multiple times. In which case you need to be able to view the system as a whole, realize that that's taking place, and perform that parsing at a higher level, so that you can pass the data in a format that actually matches what the different components need, without each and every one of them needing to go through that process independently.
 
Aj:
So I'm actually working on something right now where I am kind of doing that. But, I mean, it could come back to bite someone in the future, but it's basically, it's easier to pass around the string value that's an encoded representation of some data than it is to pass around the data. It's just generically more useful to be working in the string format that everything expects. But I think in my particular case, and in the cases where I've done that, it kind of falls into that 5% scenario you were talking about, where the bulk of what the application is going to be doing is not parsing this string and passing it around. The bulk of what the application is going to do is other stuff, and the string is just useful. But I do find it odd that a parser would choose that strategy, because it seems like, writing a parser, you know that everything needs to have that information all the way down.
 
Dan_Shappir:
Yeah, well, unfortunately, that's what they did. So basically what I did, my primary fix, was essentially just to parse that XML once, hold on to that parsed tree of objects, and then pass that around instead of passing around the XML string, and basically reduce the number of parses from four or something like that down to one. Also, I used a more efficient parsing engine, or a parsing strategy, and I even did some data sanity checks on the raw string before I even parsed it the first time. So if the string was bad, because that could happen, you could cut it off before you even parsed it, by just doing a string search or something like that. The end result was that after that optimization, the Java version actually ran faster than the C++ version. And by the way, after all these optimizations, the previous optimizations that I did actually started to make an impact as well, because now they weren't just the 5%, now they were suddenly something like 30%, because I had removed a lot of the other overhead. So now those optimizations actually started to make an impact as well. But that was just icing on the cake, as it were. The important lesson that I learned, and something that I now do whenever I'm tasked with actually improving the performance of any system, is first I make sure that I'm actually able to measure what the current performance is. Then I make sure that I'm able to actually analyze that performance. That way I identify the bottlenecks, focus specifically on those bottlenecks, and after I make these changes and verify that all those changes that I've made actually have made an improvement, I effectively just repeat the process. I have a system in place to actually monitor the performance of the system; each time I focus on the most significant bottleneck, and I optimize that. Again, only after I've verified, based on the real data, not my assumptions, that that is actually the bottleneck. And by the way, very often when I speak with the people who built the system, because this is something that I'm often brought in to do, they are usually surprised by what I identified to be the main bottleneck. They have their various ideas, it's probably here, it's probably there, and they're almost always wrong. Even though it's their system, the system that they built and that they have intuitions about, usually their intuitions are wrong. And like I said, after I address a particular bottleneck, I don't immediately assume I know the next bottleneck. You know, the bottlenecks tend to change. It's like when you have a freeway and there's a certain intersection where it tends to have traffic jams, and they make changes, and now those traffic jams are gone. But what actually happens is that that traffic jam often just moves down the line. And it might move to somewhere where previously there was no traffic jam, because, you know, there was only a trickle of traffic coming in from the previous one, so a traffic jam never formed. But now that that previous point was addressed, that's where the traffic jam moved. So like I said, I usually repeat that process iteratively until I get to the point where we can say, okay, now performance is good enough. Or alternatively, the cost of improving performance from this point going forward is such that, you know, we need to consider our options. But that's a really important lesson that I've learned.
And fortunately I learned it not that late in my career, or sufficiently early in my career: you should never assume that you can figure out where performance bottlenecks are. You always need to measure and identify them based on either real-world data or profiling of real-world scenarios.
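 
A browser-runnable sketch of the parse-once fix Dan describes, nothing like the actual Java middleware: one cheap sanity check on the raw string, one parse, and the parsed document shared by every consumer. The payload shape and reader functions are invented for illustration:

```js
// Parse the XML exactly once, then pass the parsed tree around instead
// of re-parsing the raw string in every component.
const parseXml = (s) => new DOMParser().parseFromString(s, 'application/xml');

// Hypothetical "components" that each need one value out of the payload.
const readAccount = (doc) => doc.querySelector('account')?.textContent;
const readBalance = (doc) => doc.querySelector('balance')?.textContent;

function handleResponse(xml) {
  // Cheap pre-check on the raw string before paying for a full parse,
  // like the sanity checks Dan mentions.
  if (!xml.includes('<record')) throw new Error('unexpected payload');
  const doc = parseXml(xml); // parse once...
  return {                   // ...and share the parsed tree.
    account: readAccount(doc),
    balance: readBalance(doc),
  };
}

console.log(handleResponse(
  '<record><account>42</account><balance>100</balance></record>'
));
```
 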
 
Steve:
Well, that would make sense considering the well-known adage about the danger of assuming, period, regardless of what you're assuming about, right?
 
Dan_Shappir:
Oh yeah, for sure. I'm a huge proponent of doing anything that you do based on data. Now, you don't always have data. A lot of us operate in situations where data is lacking or insufficient, but often you have more data than you actually realize, especially if you're willing to put some effort into properly collecting it. And by the way, it kind of leads to situations in which I kind of have to push back on people. So for example, I'm brought in to help with a project that has performance issues, and I notice that that project does not have sufficient data being collected. I resist, you know, attempts to push me to start optimizing it, or to tell the team how to, before I get sufficient data. And sometimes I get pushed hard, and I push back, because I know that if I start optimizing with insufficient data, like I said, either I'll optimize the wrong thing, or maybe I'll optimize the right thing but I won't know that I actually made any difference. You know, if I'm not properly measuring the system, how do I know whether I'm actually improving things or making things worse?
 
Aj:
There's no argument to be made.
 
Dan_Shappir:
You're all nodding in agreement, so I assume you agree with me.
 
Steve:
Yes, another. All right. Which is a bummer.
 
Dan_Shappir:
And by the way, like I said, and I'll repeat it again: I think that the Performance tab in the Chrome DevTools is an amazing tool that, unfortunately, a lot of web developers are not sufficiently familiar with. And like I said, its primary purpose is obviously to improve the performance of your system. But it also gives an excellent overview and understanding of how the system actually works: which functions actually get called, in which order, and where the system actually spends its time. That's really important when you try to figure out how things work and which functions actually end up calling which other functions.
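 
If you want your own milestones to show up in the Performance tab Dan recommends, the standard User Timing API is one easy way in; marks and measures recorded this way appear in the panel's Timings track. A tiny sketch, with a stand-in busy loop for the real work:

```js
// Bracket the work you care about with named marks...
performance.mark('transform-start');
for (let i = 0; i < 1e6; i++); // stand-in for the actual work
performance.mark('transform-end');
// ...then turn them into a measure that DevTools will display.
performance.measure('transform', 'transform-start', 'transform-end');

// The measurement can also be read back programmatically:
const [m] = performance.getEntriesByName('transform');
console.log(`${m.name}: ${m.duration.toFixed(1)}ms`);
```
 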
 
Steve:
Yeah, the dev tools, I live in there when I'm working on the JavaScript side. I even, you know, hear podcast episodes or read articles about all the things you can do with the dev tools. And sometimes I feel like, even with everything I use, I still have only scratched the surface of some of the help that you get from using those tools, some of the tools that are in the dev tools.
 
Dan_Shappir:
Yeah. Unfortunately, it's kind of a complicated tab. It also has this weird thing where it has two different timelines in the same tab. So you have a top timeline and a bottom timeline, which is kind of a window into a particular section of the top timeline. So people tend to find it really confusing, to the extent that the Chrome team has actually added another DevTools tab called Performance Insights.
 
Steve:
Yeah, I've seen that.
 
Dan_Shappir:
Just because the Performance tab tends to be overly complex for a lot of people. So the Performance Insights tab is like a Performance tab light, as it were. It doesn't really add information that you didn't have before, it just makes it easier to get certain information.
 
Steve:
Alrighty, AJ-
 
Dan_Shappir:
Anyway, that's my-
 
Steve:
Oh, sorry.
 
Dan_Shappir:
No, cool, I was just summing up. That's my war story, so now it's somebody else's turn.
 
Aj:
Alright, I don't know if mine's gonna be as cool, but... I'm gonna try to do something that I'm great at.
 
Steve:
All right AJ, your turn.
 
Aj:
Take a two minute story and turn it into a 20 minute story. So.
 
Dan_Shappir:
Obviously not, you know.
 
Aj:
Back in the olden days, the job that ripped me out of college was a company that
 
Steve:
Great.
 
Aj:
designed radars. It's actually a spinoff of the company that makes the radars that are above the stoplights and the traffic lights and stuff. So when you go and pass a traffic light, you can look up and you can see these little boxes that are counting cars. And that company is called MSAR. And that company had a spinoff company that at the time did work that, through all reasonable assumption, was for the military. Because some of the investment came from In-Q-Tel, and In-Q-Tel is an investment arm of a three letter agency. So who knows who we were doing work for? I can't possibly say, I'm certainly not allowed to say. But it's public information that they got funding from In-Q-Tel, so you know, you can draw some lines and maybe come up with some guesses. And as both of y'all have probably experienced... have you done work for military or government before?
 
Dan_Shappir:
Like I said, I used to work at a company that did work for banks. I don't know if they're any better than the government in this regard. I actually served in the army, and I did work for the army while in the army, but I don't know how that compares.
 
Aj:
Well, I mean, just things are slow. Things are inefficient. Things are convoluted. There's a lot of kind of faux secrecy, things that don't need to be secret. But if you can't prove
that it shouldn't be secret, then sometimes it is by default. And so you get a lot of, uh, this complication of key people not having the right information, because it's being kept from them. So you're trying to have two companies work together, but you tell the two companies that they're working together and then you don't tell them what they're doing. You just give them vague specs that were created by yet another company. And so, by the time you finally get in the room together, it's just a hot mess. And the conversation basically goes: so wait, all we needed was GZIP and JSON, and we spent six months developing this other thing instead? Oh, too bad we couldn't have had a conversation to understand that's what was actually preferred and desirable. Anyway, that was kind of the environment. We knew that our systems were going to have to work with Windows computers, and that our customer had only approved Internet Explorer as the web browser. Because you want the web browser that's the easiest to get spyware through to be the certified one, that way the upper management can always listen in on the lower ranks. I have no idea what their reasoning was, but for some reason Internet Explorer was the approved browser for what we were working on.
 
Dan_Shappir:
Well, it can be either really bad or not that bad, depending on when the story took place. You know, if you go back enough, there were times where Internet Explorer was actually the best browser available.
 
Aj:
5.5 baby! Yeah. So this was circa 2009, I want to say, give or take a year.
 
Dan_Shappir:
But yeah, something like that. Up until then.
 
Aj:
I could Google that for you.
 
Dan_Shappir:
Oh yeah, by that point in time Internet Explorer was way past its prime. When did Chrome come out? Anybody remember?
 
Aj:
I remember Gmail coming out, and I don't think, I think Chrome was announced the next year.
 
Steve:
Or early 2000s, I want to say 2003, 2004 maybe, I don't remember.
 
Dan_Shappir:
I could Google that as well.
 
Aj:
Anyway, the point being that, what we were trying to do is, we're taking this radar that's producing images of how far and how close things
 
Dan_Shappir:
Yeah, w-
 
Aj:
are, and trying to map them in a way that's really easy to understand and see. And the way that we had done this up to this point was... so this was back before Raspberry Pi. Raspberry Pi came out while I was working for this company. And I don't know, I guess I probably shouldn't say what we actually used, just in case they still use it, because I think that they chose that vendor because that vendor had an industrial 20 year guarantee on being able to produce the same chips. Uh, I don't think it's going to be that extreme, but I don't know. I just don't know where the intellectual property has gone or what's become
 
Dan_Shappir:
You don't want some military police or something knocking on your door. Is that what you're saying, AJ?
 
Aj:
of it or, you know... At this point, I would assume that they've moved on to a better chipset, but... I shouldn't, I shouldn't assume that, because the reason that they chose the vendor they chose was because of the long support contracts.
 
Dan_Shappir:
Never assume, AJ.
 
Aj:
So even if the chip manufacturer, you know, there's various, uh, chip manufacturers,
Intel and ARM and Texas Instruments and Atmel and, you know... so even if the chip manufacturer had continued to produce the chip, or could stop producing the chip, they had a stock supply of it or whatever. So at the time, we had an actual, like, PlayStation style yellow cable, a composite cable that you'd plug in, and the video processing was being done on the device, and it was being written to a buffer pipe. So it would just try to write one frame and then the next frame and then the next frame, and it was extremely rudimentary. A blue background, and then red dots, and then yellow squares with crosshairs going around the dots. And I think that the yellow squares were supposed to indicate the margin of error. But basically, it was very, very similar to the data that actually came back from the radar. So the radar is getting reflections back, and if you basically just scale that out, because you know what the distances are and you know what the angles are... it was not, I don't think it was quite one to one, but the image that we got back from the radar and the way we painted the image out to the TV screen were pretty close. But I think that that original one was done on basically a developer kit board. So, even now, there's a microcontroller that's pretty popular called the Blue Pill, and you can get a developer kit version of the board, and it's got more features. It has more inputs and outputs, it just has more stuff on it than the actual $5 Blue Pill. Cause the $5 Blue Pill is pared down so that it has just the minimum amount, so they can reach that $5 price point, and then you build around it to add more stuff onto it. So if I remember correctly, the original version of this radar was built on that kind of development kit board. And then they stopped selling the development kit board, and they did an end of life on that particular feature set that came on the board. So I don't remember if the processor was still available, but there was some combination of features that were no longer available, and so we were going to have to switch away from this system. Anyway, we had to end-of-life it. But you know, our customers, whoever they were, were perfectly happy to just plug in this yellow cable: you turn the TV on, you plug in the yellow cable, boom. And it gave distances, and the distances were scaled fairly accurately. So it'd say, you know, 50 meters out there's this target, it's moving in this direction, it's moving closer or further, left or right, and you could watch it move as the frames progressed. And there was also an XML packet system that replicated that same information. And in an early version of us trying to get this thing to work for the next processor that we were going to use, we actually took these frames and we sent them across the wire as raw data meant for NTSC television. So they weren't in JPEG format or anything, it was just raw bytes. But in a very early version, I don't know if we even released this, it was just to help us transition and get everything working again. We sent those bytes across the wire, and we had to process them from that into a JPEG. And again, it was nearly one-to-one, this raw data and what we needed to show, but it just wasn't in a recognizable format. It wasn't in an RGB format, and we had to translate it.
And this isn't actually the point that I was building up to, but just as an aside, since we're talking about optimization: as part of that process, it turned out that we basically had to iterate over every pixel, so to speak, of the data, and then turn it into an actual PNG pixel. It was too slow. It was producing, I don't know, one frame every five seconds. I'm just making up a number, I don't remember what it was, but it was just unacceptable. It wasn't real time enough to be useful. And I had probably watched some recent JavaScript optimization talk, and I just started fiddling, because I didn't know quite how to profile it. Okay, well, you know, maybe there's a couple of things in this loop of translating the data that we could make more efficient. And I do remember there were two things. At the time, ES5 was new, it wasn't fully implemented, and so the forEach function was not yet optimized. The two things that I did that brought it down from something like five or ten seconds down to 80 milliseconds were these, and you shouldn't do this today, because today these things run at native speed, they've been optimized, you know, unless you find that you actually really need to change the way you're doing it. I swapped out a forEach for a for loop. And then there was a place where, out of naivete or whatever reason, rather than doing a plus-equals through the iteration, I was doing a times by the index. I was saying, you know, the row value is i times the width, or something like that. Yes, exactly. Well said.
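 
A reconstruction, not AJ's actual code, of the two kinds of changes he describes: a plain for loop instead of forEach, and a running offset bumped with plus-equals instead of recomputing row times width for every pixel. The dimensions, buffers, and color mapping below are invented for illustration:

```js
// Convert a raw radar intensity frame into RGBA pixels, the hot path
// in a frame-translation loop like the one AJ describes.
const width = 640, height = 480;
const frame = new Uint8Array(width * height);      // raw intensities in
const rgba = new Uint8Array(width * height * 4);   // RGBA pixels out

function convertFrame() {
  let src = 0; // running offsets instead of multiplying per pixel
  let dst = 0;
  for (let row = 0; row < height; row++) {     // plain loops, not forEach
    for (let col = 0; col < width; col++) {
      const v = frame[src];
      rgba[dst] = v;           // R
      rgba[dst + 1] = 0;       // G
      rgba[dst + 2] = 255 - v; // B: crude blue-to-red mapping
      rgba[dst + 3] = 255;     // A
      src += 1;  // plus-equals, rather than src = row * width + col
      dst += 4;
    }
  }
}
convertFrame();
```
 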
 
Dan_Shappir:
So like instead of doing, like, adding the number of pixels per row to the counter, you were multiplying the row number by the width or something like that.
 
Aj:
You got exactly what I was trying to pull out. Yes, that's exactly what I was doing. So each of those changes was incredibly significant. It required both
of them. But it got it down to where it went from taking, like I was saying, 10 seconds to render an image to about 80 milliseconds, if I remember correctly. So we went from nowhere near real time to real time with cycles to spare. And anyway, this was the intermediary stage. But what we ended up doing was, Google Maps had already come out, and there was also OpenStreetMap and Yahoo Maps. It turned out they all use the same grid system, which I can't remember the name of it right now. But it's an imperfect system that's skewed. It's based on flattening the world. When you zoom in close enough, the squares are accurate enough; when you zoom out, the squares are really, really inaccurate, because, you know, the square that represents Alaska is much larger than it actually is, whereas the square that represents something at the equator is smaller than it ought to be. It's one of those types of scaling issues. But we were able to get the Google Maps downloaded... or, I mean, the maps, the free stuff that had no copyright entanglements, which of course is what we encouraged our customer to use, was the one that had no copyright entanglements, of course. So we were able to ship the demo of the product with the open API, OpenStreetMap or whatever. And then if they happened to read that we reverse engineered that from the Google Maps... But we created this great map system where we could overlay, and then we could get the position of those targets, everything was scaled properly, and have the targets move on the map. So you could see if something was half a kilometer away, a kilometer away, you'd see it on the real pre-downloaded pixels or map tiles. And the thing is, we couldn't do this in Internet Explorer. I don't remember what the challenges were, but I mean, even starting from getting some of the data processed, to just having the right APIs to use. I know that we did not have it working in Internet Explorer when we worked on this version, and we just didn't think it would be feasible to get working. You know, we had to make it work on Explorer, and we basically just said, I mean, we meaning I, I just said no. And we built it to work in Firefox. I don't know if we had it in Chrome, but we built it to work in Firefox, and we showed them the demo of this map system that was far superior to the system that they had before, where they get the blue screen with the red dots. And I think there was some amount of heat-map-esque-ness to it, where maybe it had some green or something too. But, you know, we replaced that with the OpenStreetMap view and these, you know, live moving targets that are updating several times a second. And in the web interface, you could actually pick what color you wanted them, or you could choose to ignore one. So we had all of these features that we would be able to deliver in this next version of the product. And when they saw it, they figured out how to make an exception to get Firefox as an approved browser for this specific application. And this is something where, this is the reverse: my experience to date would not have me make that decision again.
You know, if I were to go back, or not if I were to go back, but if some similar situation presented itself again, I would not try to fight against the powers that be, because most of the time the powers that be win, and the powers that be are providing your paycheck, right? And it's one of those things where only the blissful idiocy of youth allowed me to create a solution that, I mean, it was a good solution. There wasn't a better alternative. It was leaps and bounds ahead of the alternative. But just the idea of kind of, in a way, sticking it to the man and saying, no, we're going to provide a product so good that you're going to change your rules and your policies to fit our product into your set of bureaucratic tape... I don't know that I've got the chicharrones to attempt something like that with all of the other experiences that I've had since that time. It was just such a... I don't know if fluke would be quite the right way to call it, but it was fortunate. And there may have been some other forces, because it was a life-saving device. You know, these devices we suspect were used in areas where people with guns were coming up against people who were trying to defend a particular area that was being held, and so it may have been that the life-saving benefit of it was so great. Which is not what I was thinking about. I was just thinking about, you know, user experience, looking cool, being something that no one else had created before, being head and tails, or head and shoulders, above the previous solutions or anybody else's, because we had a couple competitors. Nobody had something like this. Yeah, yeah. So they actually were able to change their regulations and they found a way to get Firefox approved for the systems that this was going to be deployed with.
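 
For reference, the shared grid AJ can't recall the name of is commonly known as the Web Mercator, or slippy-map, tile scheme. A small sketch of the standard lat/lon-to-tile math; the sample coordinates are made up. Because the projection flattens the globe, an equal-looking tile near the poles covers far less real ground than one at the equator, which is why Alaska looks inflated, as he describes:

```js
// Standard slippy-map tile math used by OpenStreetMap-style tile servers.
function latLonToTile(lat, lon, zoom) {
  const n = 2 ** zoom; // number of tiles across one axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { zoom, x, y };
}

// Made-up example coordinates, just to show the shape of the result.
console.log(latLonToTile(61.2181, -149.9003, 10)); // Anchorage-ish
```
 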
 
Dan_Shappir:
So they ultimately ended up choosing your improved solution despite the fact that it didn't work on Internet Explorer?
 
Aj:
So, but yeah, like I said, I don't know if I would be able to do that again today, if I had the experience and the wisdom that I have now. I don't know if I would have been that gutsy.
 
Dan_Shappir:
Cool.
 
Dan_Shappir:
I don't know if you'll have time for it, but I have a funny story where I did not have the guts, or the foresight, or the know-how to say no, and the consequences were a bit unfortunate. If you want, I can.
 
Steve:
Yeah, I got a story. And then if we got time after that, Dan, we can squeeze yours in. I don't see why not. From a time standpoint, mine won't take nearly as long as you guys' did. This is, I guess you'd call it, a success story, to fit in the old cliche of thinking outside the box. So back at my last place, where I was working on Drupal, this was back in about 2019, I was working for a very, very large international mega corporation, a manufacturing industrial type corporation. And we were working on re-releasing our site. Just for context, the original older site was on something like Classic ASP and Microsoft 2000, some sort of framework, and SQL Server, just being held together with rubber bands and paper clips, type of thing. So we had brought open source into the organization, my boss had, and we were using Drupal. And it was sort of a, I don't know if you'd call it a pyramid or tripod type approach, where you had two Drupal sites on the back end that were data repositories. One was for just product information, and one was a DAM, a digital asset management type site. So your images and documents and graphics and logos and all that kind of stuff. And then instead of just using a standard front end, we indexed everything into Apache Solr and then used Solr as our data source for the front end, just because it's faster. Well, the problem, and this is sort of a side note, was that we were still using PHP templating on the front end. So I spent more time dealing with caching issues and performance issues because of that, as compared to using something maybe a little better to pull from Solr. But one issue that came to be very crucial was 301s, 301 redirects. With the very, very, very large site and amount of data that we had on the old site, you know, we would take a major hit if all of a sudden all our indexed links and old links stopped working when we launched the new site. And at this point in time, this site launch had already been delayed by a number of months, just because it wasn't ready at the originally planned launch date, wasn't even close. And so we were coming up again on our next launch date. And we had one guy, it was funny, his Slack logo was like a highway sign with the 301 on it, because that's all he worked on, was 301s and how do you store them in the site in a way that they can be, you know, accessible. And we had literally hundreds of thousands of them. I put in 250,000 at a time, sometimes. And we were trying them in text files and all types of different mechanisms and ways to access them. But everything was just killing performance, just because of the size, the pure size and the number of 301 redirects that we had. So, it's Monday evening, we're supposed to launch that Saturday, so within five days, and my boss is still pulling his hair out, couldn't get anything to work. And I had an idea. So about five o'clock that evening, a Monday evening, I called him up and I said, hey, I know you're still working on this, how's that going? Well, not good, I can't get this and that. And I said, I know we haven't thought about this, but why don't we use Solr for stashing our redirects and then just access them from the site? We can do it so and so. And he thought about it for about five seconds, and he goes, damn, that's a good idea. And so, it was the end of the day, so I said, yeah, let's do that tomorrow. I know exactly how we could do it.
I was pretty familiar with Solr, having been in charge of maintaining our Solr instances and clusters and configurations and all that. And so the next morning I go log on, and my supervisor, the one that I reported to directly, Slacks, go Steve, go, because he was so excited, because this was just like the last big issue that was really holding up the launch of the site. And so what I did was went in and built a new index, you know, defined the fields in the Solr configs, you know, defined everything we needed. And then the other guy that had been working on it, Todd, he and I worked on some code, both on how to run the Solr query to get what you want and then redirect to the appropriate URL. And then I also wrote a couple Drush commands. Drush is short for Drupal shell. It's a command line utility, sort of like Tinker or console commands in Laravel or any other type of server side language utility, where people could just run it and put a particular redirect into Solr, look them up and so on, and then we just did a lot of bulk updating.
 
Aj:
So I want to understand this technically a little bit better, because when I'm thinking, if you have all of these... when you said file based, I'm thinking something like an... So you just needed... Oh, okay. Okay. I see what you're saying. And so putting it in a CSV wasn't fast enough. And so in Apache, is that, was that better than trying to use it in the edge there for that use case?
 
Steve:
So long story short, well, that's not really short,
but the end result was that we were able to launch the site on time. You know, there were always ongoing issues with maintenance, and keeping up and maintaining the redirects, and maybe some tweaks to the tools and other things, but it was incredibly fast, and I always intended to write a blog post for us, or maybe around Pantheon, because that's the hosting provider that we were using. Never did get around to it. But the whole thing that made me proud, I guess, of my idea was that, one, we were under pressure, had to come up with an idea, and I just sort of thought out of the box in a way that people hadn't thought of, and was able to make it work, implement it, and handle hundreds of thousands of redirects in a pretty efficient manner. Even just a text file, CSV file, you know, some sort of simple file structure, you know, where you're basically mapping old URLs to new URLs. Yeah. Well, I mean, Solr is designed for search. It's a key value store, basically, but it's designed to find things quickly, as compared to a SQL table. We had all our data in SQL, but we were using Drupal, you know, with the PHP templating on the front end. And just as a proof of concept, I wrote a view front end that searched a given index, with pagination and all kinds of stuff. The speed was off the charts. It was so much faster going through Solr. I could literally click, and it was instantaneous, paginating through results, finding results much quicker than anything I'd ever seen in SQL. Just because, like I said, it's designed to quickly find things, you know. And in this case, there wasn't a lot of text manipulation you had to do. You weren't worrying about indexing your data so that it's easy to find. It was really just: here's one URL, find another URL. And from a size standpoint, putting stuff in Solr, you know, we had a lot more room there, I guess, and flexibility, than we did dumping hundreds of thousands of records into, you know, MySQL, which is the database that we were using as our backend. Yeah, it's the main source for Drupal, but you know, that was Drupal 7. So not only did you have data, you had configuration stuff in Drupal as well. It wasn't until Drupal 8 that they went with an external configuration management system using, um, some sort of YAML files, I forget. Mm-hmm.
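 
A sketch of the idea Steve describes, not the site's actual Drupal code: store each redirect as a tiny Solr document and do one key-value style lookup per incoming request. The core name (redirects) and field names (old_url, new_url) are made-up assumptions; the select query API is standard Solr:

```js
// Look up the 301 target for an old path in a hypothetical "redirects" core.
async function lookupRedirect(oldPath) {
  const params = new URLSearchParams({
    q: `old_url:"${oldPath}"`, // exact match on the old URL field
    fl: 'new_url',             // only fetch the target URL
    rows: '1',
    wt: 'json',
  });
  const res = await fetch(`http://localhost:8983/solr/redirects/select?${params}`);
  const data = await res.json();
  const doc = data.response.docs[0];
  return doc ? doc.new_url : null; // 301 target, or fall through to a 404
}

// Usage: on a request that misses, consult the index and issue a 301.
lookupRedirect('/old-catalog/widget-123').then((target) => {
  if (target) console.log('301 ->', target);
});
```
 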
 
Dan_Shappir:
And that's a key takeaway that I take from this: we generally like to reuse the systems that we already have in place. And very often it does work, but sometimes it doesn't. And then it's worthwhile, like you said, to kind of think out of the box and ask, what are the actual constraints that we have on the data? Maybe I can use a different system that actually leverages these constraints to gain better behavior or better performance. And like you said, given that you were just mapping one string into another, something that's a key value sort of a map is potentially much more efficient than searching through some sort of a database that, like AJ said, is loaded down with, and provides, a lot of functionality that you don't actually need in this case.
 
Steve:
Yeah, I need to go back and correct one thing, AJ. So the database was used to store data, but the front end was actually pulling from Solr. So all of our data, you know, products and paths, logos and graphics and that kind of stuff, was in Solr, and the front end was pulling from that. But the database was still being used, you know, for writes, adding data and that kind of stuff, plus configuration information. But with Solr, we had a lot more... it was easier to upscale and change things as needed than it was from a MySQL standpoint. And as far as I know, as I understand it, that site is still up and running using that same infrastructure.
 
Dan_Shappir:
That's always cool to find something that you did a while back and see that it still works and serves its purpose.
 
Steve:
Yep. So Dan, you said you had one more
 
Dan_Shappir:
Yeah,
 
Steve:
short story.
 
Dan_Shappir:
so I'll tell it quickly or briefly. So this one takes me way, way back. So I'm talking actually the dot-com days, late 90s. I was work, oh yeah. Okay, if you say so. Anyway, so we're talking late 90s and I'm working at this hot startup. And what we are doing is which could leverage unused bandwidths. What I mean is that back then, if you wanted to watch video online, streaming protocols were just not practical given bandwidth constraints that existed back then, and the protocols that existed back then, it just wasn't effectively workable. So if you wanted to actually watch a video you know, that you from the web, you would actually need to download that video in order to watch it. And who wants to watch a video that you need to wait, I don't know, two hours for it to download because before you can actually do anything with it. So the system that we had in place was one that would be was able to download large chunks of data in the background by using by like leveraging the unused bandwidth. browsing the web, you download stuff into your browser, but then you basically just read the site that you downloaded. So most of the time, the network connection is available. But if you start a big download and you don't get out of the way once the user clicks a link in the browser, then you degrade their experience. So what you want is a system that can identify when network bandwidth is available, start downloading stuff. automatically pause that download immediately whenever it discovers that the user actually requires their bandwidth. And that's kind of like the intellectual property that the company that I was working at at that time had kind of invented, this kind of a mechanism for downloading large stuff in the background without adversely impacting your browsing experience. So that you could say, and stuff like that. And when we implemented it, when we started implementing the client side for this, it was like a custom client that used the web or the internet, but it wasn't like browser-based. It was like custom software that you installed on your computer. And what people had before Windows 95 was something was Windows 3.1 or 3.11, which were terrible systems. They were like 16 bit. They didn't really multitask. There was no Well, I worked in DOS before, but I'm not going that far back. All I'm saying is that Windows 3.1 or 3.11 as an operating system was pretty abysmal because it was not much more than a sort of a visual facade on top of DOS in a lot of ways. So it was like, you know, all the processes ran in the same memory space. It was just 16-bit, not 32-bit. Like the processes had to relinquish control and and whatnot. It was pretty bad. And so we said hey, you know Windows 95 has just come out what you know, it's gonna you know, why support those older systems? There's no point so we wrote our client software for Windows 95 using all the new capabilities that were available in this much more quote-unquote modern operating system and then we brought in And one of the investors that we brought in was SoftBank. You know, the guys that invested in WeWork, in Alibaba, and various other companies. They exist still today. They're pretty huge. And they liked what they saw, and they were willing to invest millions of dollars in the company, but they said, but they had a catch. They said, our market analysts say that it's going to take years, maybe even years, to even decades for Windows 95 to replace those older systems. 
So if you want our millions of dollars, you need to support those older versions of Windows as well. So you need to backport your client software to also run on the Windows 3.1 or 3.11 in addition to Windows 95. And I'm talking, this is like 97 like that. And we kind of tried to, and we kind of, and unlike you, while we did try to argue at the end of the day, they were adamant and it was their money. So we formed within the company a three-man team and think about it for a startup to allocate three people to work on something that's like, I don't know, that was like a third or a quarter of an entire development team. And we We worked on, and I was part of that team, fortunately, unfortunately. And we worked on it for like a whole year, backporting that system. So it's like three man years of backporting that software to run on Windows 3.1. And we succeeded. Amazingly, it worked. So we actually were able to release that as a product. Now, the interesting thing is, because it had telemetry. So whenever somebody, it was like for the consumer market, but whenever somebody installed that software on their computer, you know, we would get information about it. So we knew how many people were using our software at any point in time and where they were coming from and whatnot. So like, after we released it, we had, I don't 3.1 can you guess? No, it wasn't. There were something like three users. Yeah, so for the win. So we invested something like three man years of effort on supporting three users because that was a condition for getting the money from SoftBank. Yeah, well on the positive side, we did get the investment money and we were able to go public before the dot-com bubble burst and the company did reach evaluation of a couple of billions of dollars before it went back to nothing. So you know, I was able to exercise some stock options and it did help me buy my first house. So you know, it turned out good in the end. But I wish we you know, you're idiots or whatnot and please give us your money without forcing us to create this version that nobody needs and nobody asked for. Well, it's, well, you know, the big cost is the alternative cost. It's like the thing that we probably would have been able to deliver other things much, much earlier. It's really difficult for me to try to guess what would have happened if any, if things would have turned out better or worse, you know, who knows? I think it would have done a big service to the said, you could say that the company got some millions of dollars for this project, even if at the end of the day, that project wasn't really needed for the success of the company. So effectively, you could say that we sold this as a product to SoftBank for the investment bucks. So I really can't say at the end of the day, like how much of a difference it would have made. like a third of your R&D team on something that's totally not beneficial for a startup company for a whole for I think it was something like a whole year I can't imagine that being a good thing for any startup Or like that episode on Seinfeld? Yeah, there's an episode where he's being chased by a library cop for a book he supposedly didn't return when he was a teen or something like that. Thanks for watching! Well, to be honest, I don't know. That's a question for the guy who was the CEO of the company, who's by the way now, I'm still kind of in touch with him. And he's like a billionaire. So I imagine that his life turns out okay, despite this episode. 
So everybody came out ahead in the end, except maybe Softbank, although they made a shitload of money off of us. Alibaba but Okay, edit. It made a lot of money off of selling their investment in Alibaba. But we can beat me. I guess everybody will guess which word I used. So I don't know what the consequences have been. I assumed the fact that he had us still working on it for that entire duration meant that either he had bigger fish to fry or he just couldn't get out of it. Um, you know, uh, but again, I, at, at that point in time, I was just a junior developer doing what I was told. So it was what it was.
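 
A rough sketch, in modern JavaScript, of the bandwidth-yielding download mechanism from earlier in Dan's story: fetch a large file in small Range-request chunks and back off whenever the user needs the connection. Nothing here resembles the company's actual 90s client; the isUserBusy() check is a hypothetical stand-in for their bandwidth-detection logic, and the server is assumed to support Range requests:

```js
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function backgroundDownload(url, isUserBusy, chunkSize = 64 * 1024) {
  const chunks = [];
  for (let offset = 0; ; offset += chunkSize) {
    // Get out of the user's way: wait while their traffic needs the link.
    while (isUserBusy()) await sleep(250);
    const res = await fetch(url, {
      headers: { Range: `bytes=${offset}-${offset + chunkSize - 1}` },
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const buf = await res.arrayBuffer();
    chunks.push(buf);
    if (buf.byteLength < chunkSize) break; // reached the end of the file
  }
  return new Blob(chunks); // the fully assembled download
}
```
 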
 
Steve:
Alrighty, so with that, we're going a bit long, so we're going to turn to picks. Picks are the part of the show where we get to talk about things we like to talk about, that may or may not have to do with tech: books, food, movies, ATV crashes, you name it, we can talk about it. So let's start with Dan. What do you got first for picks, Dan?
 
Dan_Shappir:
Okay, so for my first pick: I actually attended a conference in Israel yesterday and today, this time not as a speaker but as an attendee, and I had a great time. It's called the Reversim conference. So basically it's just like going in reverse, but, I don't know, it's a sort of a wordplay. And they started a conference. So, you know, that's what it's about. And I had a great time there. It was an in-person conference and there were a lot of, you know, hallway interactions, which were great. But what I specifically want to call out is that a person that we've had on our show as a guest, Moran Weber, I forget which episode it was, maybe we can find it in the background. She gave a talk titled Code Like a Girl: Breaking the Gender Stereotype. And it was an amazing talk. She gives, you know, real world data about stuff like why it is that women are still, to this day, relatively underrepresented in the tech industry. The way that, up until something like the mid-80s, working with computers was actually considered a quote-unquote feminine job, something that's appropriate for women to do. And it was only some time in the 80s that this sort of inversion took place, one that has never been fully rectified. And she gave interesting insights about it, why you may want to do something about it, and what you can actually do about it. It was a really cool talk, and maybe we should have her on our show again to actually talk about that. So it was really great and I wanted to shout that out. So that's one pick. Another pick that I have: I don't know why, but I started rereading this really, let's call it oldie but goodie, science fiction book called The Mote in God's Eye. It was actually co-written by Larry Niven and Jerry Pournelle, both of them well-known science fiction writers. And I'm enjoying it a lot. I think I read it, I don't know, 20, 30 years ago. I don't remember it. So I'm reading it now, and I recommend it, if you like that sort of thing, if you like hard science fiction books. It's kind of slightly dated. I mean, you run into futuristic technology that's not as sophisticated as what we have now, like smartphones. They didn't consider the concept. So it's like with Star Wars, where they constantly go to these intercoms on the wall when they're in the ship, and you're thinking, hey, why don't you just use something like a smartphone or whatever? But it's still a great book and I'm enjoying it. So I'd like to shout that out as well. And my third and final pick is, as always, the ongoing war in Ukraine, where the Ukrainians, it seems, are successfully pushing back the Russians, and the Russians are retaliating in what can only be called explicit targeting of civilian infrastructure and the actual civilians themselves, using rockets, using Iranian drones, like suicide drones, that literally dive into people's houses and blow them up. So it's quite terrible. And again, anything that you can do to help, I urge you to do. And those would be my picks for today.
 
Steve:
And just for reference, the episode with Moran Weber was number 483 of JavaScript Jabber. We'll put that in the show notes. AJ, what do you have for picks?
 
Dan_Shappir:
You've mentioned it a couple of times. You might have sort of picked it. I don't remember. I think, I think you kind of did. I remember you mentioning it. You have more perseverance than I do. I didn't, you know, neither the book nor the movies. I do recall that it was said that it was written as something like fan fiction for the Twilight series. I have no idea. It's, you know, anyway. I have to tell you that I have no patience for stuff like that. So for example, I tried to watch Rings of Power, and I made it to episode two or something, and I just can't. And I don't try to force myself. It's Tolkien fan fiction. I have no patience for Tolkien fan fiction, and that's that.
 
Steve:
I
 
Dan_Shappir:
think
 
Steve:
don't remember
 
Dan_Shappir:
you kind
 
Steve:
it.
 
Dan_Shappir:
of. I remember you mentioning it. I, you have more perseverance than I do. I didn't, you know, neither the book nor the movies. I do recall that it was said that it was written as something like fan fiction for the Twilight series. I have no idea. It's, you know, anyway. I have to tell you that I have no patience for stuff like that. So for example, I tried to watch Rings of Power, and I made it into episode two or something, and I just can't. And I don't try to force myself. It's Tolkien fan fiction. I have no patience for Tolkien fan fiction, and that's that.
 
Steve:
So, AJ, I have a suggestion for you that incorporates some of what Dan just talked about along with one of your picks. You were talking about the safety vests, you know, that protect your chest, okay, the chest and the back. My son and I are reading through the Lord of the Rings trilogy, and if you remember, this actually sort of goes back to the Hobbit. But my suggestion would be a coat of Mithril. You know, that's the armor that was created by the dwarves, a coat of Mithril. It's incredibly valuable. It was given to Frodo, and it actually saves him when he gets jabbed by some orcs in the Mines of Moria. So if you can find some, I would suggest one of those, because they seem to be very strong. Could be, could be. All right, so finally, my picks, my dad jokes of the week. So, you know, when I first moved into my house, we had some delays in getting internet. And so I had one neighbor who let me use their unsecured wireless for a while, which I needed for work. And I had some other neighbors similar, and I thought they were really good people too, but then they actually put a password on their wireless. They wouldn't let me share it anymore. OK, moving along. Fun fact. Did you know? I thought they were good people until they put a password on their wireless. They were being mean and not letting me use it anymore. Okay, moving on.
 
Dan_Shappir:
Yeah, yeah, do move on.
 
Steve:
Moving on. Did you know that t-shirt, you know we guys wear t-shirts, is actually short for Tyrannosaurus shirt, and that's because of the shorter arms.
 
Dan_Shappir:
Okay
 
Steve:
All right. And then finally, I was, you know, being Halloween, my son came up and asked me a question the other day and he says, dad, how do you cast spells? I said, why don't you just follow the instructions? He said, well, which instructions? I said, yeah, those ones.
 
Dan_Shappir:
Okay, okay. Yeah, I like that one.
 
Steve:
So that's it with picks and our episode of JavaScript Jabber. Hope you've enjoyed it. And we will talk to everybody next week.
 
Dan_Shappir:
Bye.