DevOps and NeuronSphere with Brian Greene - DevOps 165
Brian Greene is the CTO of NeuronSphere. He begins by talking about his career progression and some of his achievements. He dives into designing medical devices from a developer's perspective. Additionally, he talks about his company and much more!
Special Guests:
Brian Greene
Show Notes
Brian Greene is the CTO of NeuronSphere. He begins by talking about his career progression and some of his achievements. He dives into designing medical devices from a developer's perspective. Additionally, he talks about his company and much more!
Picks
- Brian - Robot Framework
- Jillian - On Writing: A Memoir of the Craft by Stephen King
- Jonathan - The Big Con
- Jonathan - Cup o' Go
Transcript
Jonathan:
Ladies and gentlemen, welcome to another exciting episode of Adventures in DevOps. I'm your co-host for the week, Jonathan Hall, and here in the studio I also have Jillian.
Jillian:
Hello.
Jonathan:
And we're excited to be meeting with our special guest today, Brian Greene. Welcome, Brian.
Brian Greene:
Good morning.
Jonathan:
Would you tell us a little bit about who you are, what you do, why you know anything about DevOps, and maybe why you're here to talk to us?
Brian Greene:
Sure. My name is Brian Greene. I'm the CTO of NeuronSphere. You can find us at neuronsphere.io. We are a DevEx platform engineering toolkit focused on data, and there's a huge DevOps component to that. I've spent my career building software teams across multiple data domains, and in every one of them, how we do CI/CD and continuous DevOps automation has always been a real accelerator for quality. So that's why I'm here.
Jonathan:
Awesome. We were talking before we hit record about some of your past with medical devices, and that really sparked Jillian's interest, because she works in an adjacent industry with a lot of data science regarding genomics and stuff like that. Do you want to give us just a brief sort of history of how your career has advanced? What got you to where you are now?
Brian Greene:
Sure. The super whirlwind version is, it's kind of all been about data. I started with web apps, building transactional web apps and figuring out how to get those deployed and tested, and I very quickly figured out that the managers were often talking to the guy in the corner who was writing all the SQL. So I'd go peek over there, and I got into BI. And, you know, global business intelligence is fun, but many of those tools lacked sufficient automation, right? So I actually spent a lot of time building up a kind of DevOps capability. This is back in the Subversion days. Remember Subversion, right?
Jonathan:
Oh yeah.
Brian Greene:
So this is like Subversion with SQL Server Analysis Services and Integration Services, and, you know, we built a little framework to do CI/CD for those artifacts. Yeah, it was fun, but it was a little slow. And we'd made a big acquisition, so I moved into enterprise service bus, kind of message-oriented middleware, and built a huge middleware system. We connected 250 endpoints internally in a couple of years across a huge bus, and we built canonical object registries and did a bunch of data lineage stuff. And with that view of that kind of ecosystem, I moved into enterprise architecture and platform rationalization, which is really interesting sort of theoretically, and financially has great implications, but it didn't scratch my coding itch anymore. So in 2016 I got a great opportunity to go build from scratch at Auris Robotics. Auris is a little company down in Redwood City. They were building a really neat surgical robot. Surgical robots produce vast quantities of data.
Jonathan:
Mm-hmm.
Brian Greene:
And when I got there, they said, you know, we have vast quantities of data. Here it is, in a pile. We gotta do something with it. And, you know, you start doing estimation and you figure out that a vast percentage of the employees need access to that data, right? Huge percentages of the business processes that you're running will use that as a backbone. And so we started building an analytics practice there. And it's all about DevOps at that point, because the results of the analytics are critical, right?
Jonathan:
Mm-hmm.
Brian Greene:
Like, we're not experimenting in prod when we're talking about analyzing the performance of a robot. We need to trust things a lot more. And that actually leads to more aggressive testing, having multiple environments, doing real validation, lots and lots of automation. Johnson & Johnson acquired that company; it was a great exit, and we had a great time in J&J. And then in fall of 2020, I guess, so about two and a half years ago, we said, hey, let's go build a real platform engineering toolkit, you know, the kind of thing we'd been looking for. And again, a big piece of that is DevOps: a big piece of that is a deployment engine, artifact storage, dependency management, test automation management. So that's what we've been at for the last couple of years, targeting surgical robot companies, and now we're sort of moving into some other wearables. So it's been exciting so far.
Jonathan:
Interesting.
Jillian:
But those sound cool. What kind of devices are you targeting now? You were mentioning earlier you have surgical robots, but there are all kinds of medical devices. Is it all surgical robots, or do you do any other robotic lab equipment, microscopes, pathology equipment, stuff like that?
Brian Greene:
Yeah, so we're open to all that. All of those are possibilities. And we're actually looking at moving beyond medical devices; we've had conversations about, do we move into nuclear? Do we move into pharma? Do we move into genomics? There's a lot of crossover in the regulated space, right? A lot of med devices are very similar to all these other regulated spaces in that there are lots of laws that are going to tell you how you have to do things, and then you interpret those laws into policies, and then those things turn into, how do we get things done? And so we feel like there's a lot of commonality, but we're pretty early, right? It's a good-sized implementation to turn it on and get it spread out across a company.
Jonathan:
So I'm trying to think how to tie this to our core audience. We can edit these mumblings out. Would you paint a picture of what it looks like? What's different about software development for medical devices versus, say, a SaaS or a typical web API product? A lot of people work on something like that. What are the differences? You talked about extra automation, more testing environments. I can imagine if you're doing a robot, you can't just set up a mock and expect the test results to be meaningful. What are the differences from a developer's standpoint when working on these sorts of products?
Brian Greene:
Yeah, that's a great question. So, you know, it turns out to be a lot about control. We talk about V&V, and in the industry you hear people talk about, oh, we're going to go through V&V. And as agile software people it kind of freaks you out, because V&V is verification and validation, and it means these very specific legal things. You have to have written requirements that say what the system does, and oftentimes there's one layer of user requirements and then a separate layer of how the software implemented them, right? So there's kind of a many-to-many relationship there. You are required by law to have that written down for a medical device. You are required to have things like what we call an FMEA, a failure mode and effects analysis: you have to prove that you followed a methodology to think about how this thing is going to break. It's interesting, because in the cloud and in DevOps we go, oh, we'll just assume everything is going to break and architect around that. But we often don't, right? Because it's really hard to do that effectively; you sort of grow yourself into it. With a medical device, that's not available. You are required by law to say, look, we have analyzed and indexed here all the possible things that could break or go wrong, and for each one of those, here is the effect, here's what we think the cause is, here's the likelihood. There are scoring matrices. There is layer after layer of documentation and control well above the software. So when you actually get to the software, the better you can do things like automated testing, the better you can do things like traceability. We talk a lot about, can you tie the testing of the software, the automated testing, back into your requirements matrix? In the data space we talk a lot about data lineage, but in device software and in controlled software development, a lot of the conversation is about requirements lineage and requirements traceability. And it's, you know, on one hand crazy boring, and on the other hand absolutely essential if you're talking about software that can hurt people. Right.
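For a concrete sense of what tying automated tests back to a requirements matrix can look like, here is a minimal sketch using pytest; the marker name and requirement IDs are hypothetical, not from any particular regulatory framework or tool.

```python
# Minimal sketch: tag automated tests with the requirement IDs they verify,
# then derive a requirement -> test trace matrix from the test run itself.
# The marker name and IDs (SRS-104, ...) are hypothetical.
import pytest

@pytest.mark.requirement("SRS-104", "SRS-221")
def test_orderly_shutdown_on_power_loss():
    ...  # the actual integration test body

# In conftest.py: collect the markers into a trace matrix.
def pytest_collection_modifyitems(items):
    matrix = {}
    for item in items:
        for mark in item.iter_markers(name="requirement"):
            for req_id in mark.args:
                matrix.setdefault(req_id, []).append(item.nodeid)
    print(matrix)  # in practice: write this out as a report artifact
```

Registering the `requirement` marker in pytest.ini keeps pytest from warning about an unknown mark.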
Jonathan:
Mm-hmm.
Brian Greene:
And ultimately, with medical devices, as soon as you work in a quality-controlled environment like that, you learn what's called a risk assessment. The risk assessment is more than, hey, I'm going to flip this switch in prod, I'm going to call the other dev and say, hey, what do you think's going to happen? I don't know, it's probably going to be okay, right? And we flip it and then we stand there and watch it. Oh, look, it worked. That's not an appropriate risk assessment.
Jonathan:
Yeah.
Brian Greene:
Right. So, you know, a risk assessment includes... and it's interesting, as long as you enjoy writing things down, which I do. I think as a developer, there are lots of things that are not code that should be written down.
Jonathan:
Mm-hmm.
Brian Greene:
And I spent my early years saying, oh, the code is self-documenting, put it in the comments, whatever. It's just not true. There's lots of important documentation that's way outside the code, and in a lot of places it's given short shrift, or we don't know what to do with it. So you go ask a developer, where did you document this? And they throw up their hands: I don't even know where I would do that.
Jonathan:
Mm-hmm.
Brian Greene:
In device software companies, even when they're very small, they will already have a clear understanding that this kind of information gets documented over here, here's how we keep a global glossary, and if you're going to do a risk assessment, here's the standard procedure for how you should do that and here's what you should document. And it sounds really constraining. It's not. You can do lots of really impressive things within it, but it is more methodical; you kind of gotta know the rules. And where that leads for DevOps is, really, frankly... medical device companies have all these regulations, right? And then let's assume you get big.
Jonathan:
Mm-hmm.
Brian Greene:
Now it's not just all these regulations around quality and product stuff. You've got other controls, like Sarbanes-Oxley. I mean, we don't talk about that as much, and we've kind of backed off of it a little bit, but in really big companies you've got all these other controls around separation of environments. So, can a developer access production?
Jonathan:
Ha ha ha.
Brian Greene:
If the answer is yes at all, then in a lot of audits, you just got a big old ding.
Jonathan:
Mm-hmm.
Brian Greene:
That's not appropriate, free access to production. If you have a really small team, you can justify it, but you should have traceability, you should have unchangeable logs that prove it, right? It should be a special thing, because administration of a running system... back to DevOps, right? Administration of a running system, monitoring of a running system, diagnostics, those are all operational concerns.
Jonathan:
Mm-hmm.
Brian Greene:
And for somebody who's thinking about and performing development activities routinely... like, yes, you can wear both hats. But even then, this is where you get to, oh, well, if you're doing operational stuff, you should definitely be using a different account, with all kinds of privileges set differently. And so, you know, I hate to say it, but this is all constraining, and it turns into organizations that get a little stodgy over time, because it's hard to build all this infrastructure and procedure and blah, blah, blah, which saps the willingness and desire to change it. So, DevOps principles, continuous delivery specifically, right? In the cloud, I guess it's been a challenge for the last decade or so to get the quality people to let me do it.
Jonathan:
Hehehe
Brian Greene:
And it really is understanding that we've got all these quality controls in place. I think DevOps and continuous integration and continuous delivery, particularly with cloud technologies, get us higher quality and higher adherence to the controls, right? Releasing up through a QA environment every day and running automated integration tests that you can prove have traceability back to your requirements, that's substantially higher quality than releasing every couple of months, right? So there's a bunch of things in DevOps that I think enhance quality, and I've had pretty good luck over the last decade really getting the quality folks to say, oh wow, that's a different way of doing it, but we can satisfy our controls better. Right? Like, we can get to better traceability, we can get to faster delivery. So much of it is, quit talking about automated testing and do it well,
Jonathan:
Mm-hmm.
Brian Greene:
right? Just double down and actually invest in it a little bit. Quality people love you when you have good automated testing. So if you really like good controls and automated testing, move into medical devices or into any of these highly regulated industries, because the quality team there will give you huge kudos. Automated testing is the only way to get any kind of velocity, because you're required to prove testing for your releases, right? Otherwise you get into this cycle of, we're gonna cut code and then we're gonna throw it over the wall to some QA folks. And I am not exaggerating when I say they're literally clicking through the app and leaving their physical initials step by step in a test script, and those documents get scanned in so that you can prove who checked that script. Automating that? Everybody loves that, right? Nobody wants to do this click-through blah, blah, blah every time. But the only answer to it is what DevOps folks, you know, really hardcore DevOps folks, would tell you: oh, well, you have to do CD.
Jonathan:
Mm-hmm.
Jillian:
I find that really interesting, though, because I would think that you would get an awful lot of pushback on trying to get rid of some of the manual part, the person actually there initialing things. I know I haven't been in the clinic or in the field in some time, but I can't think of a situation where I would have wanted some new feature on a medical device or on an intake form that was not approved by somebody else, and approved with a signature. I mean, it was a small team, but I was the software person, and I can't imagine saying, yeah, sure, I'm gonna deploy new features and they're gonna go to the clinic and be a part of this study without having been verified by somebody else. So do you get a lot of pushback on that? Because that's the first thing that I thought of: these people must be freaking out on you.
Brian Greene:
Yeah, so there's a couple of layers of pushback that you get, and one of them is logical, right? And there's a real conversation to be had here. I always love tugging on this thread a little bit. So, automated testing: when I talk about automated testing, I'm actually talking about large-scale, we-deploy-the-app testing. The other thing this leads to is lots of environmental separation: having dev, test, QA, prod, regression, etc. I think you need the ability to have multiple deployments of your app that are cleanly config-controlled, like, I can tell you everything that's in that environment, I can tell you every change that's been made, right? So there's a bunch of guaranteed other capabilities that you need to have in place. So now I say, look, I can tell you the exact version of everything in QA, and we're gonna look at this test script, and it's gonna run an integration test against a deployed system. And from a QA standpoint, you do get pushback, by the way. The pushback is immediate, and it is always: an automated test can't find things that are wrong with the app that a person can.
Jonathan:
Yeah.
Brian Greene:
True. What do you mean, though? Let's talk about that. And they go, well, what often happens... "often" kind of in air quotes, right?
Jonathan:
Yeah.
Brian Greene:
What often happens is that as the tester is executing the test script, they'll see something else, right? And therein they have found a bug, and your automated testing can't do that.
Jonathan:
Okay.
Brian Greene:
Totally true.
Jonathan:
Yep.
Brian Greene:
Totally true. But the thing that they found, by definition, also wasn't in the test script.
Jonathan:
Mm-hmm.
Brian Greene:
Right? So finding it has value, but do you need to run the known test script every time as the only way to verify all these other known things? So it's about separating out human beings doing exploratory things and looking for strange behaviors from human beings doing things that are easy to automate. Once you start talking about, okay, how do we separate those... because the reason we use computers is that they do things better than people.
Jonathan:
Mm-hmm.
Brian Greene:
And you can look at the test script... so in particular, one of the things we used to do is have test scripts that take lots of screenshots. Oh, I hate this; it makes my skin crawl a little bit. But QA folks in controlled industries, they love this, right? Go to the next screen, take a screenshot, initial it. Go to the next screen, take a screenshot, initial it. Oh my God. Automating that, so that you can show them, look, here's the behavior, here's the thing it tested, and then if you want, you can look at the screenshot... tell me how a human doing this is better.
Jonathan:
Mm-hmm.
Brian Greene:
Right? And if you want, you can look at every test report, and it looks exactly like what a human would have given you: here are all the screenshots in order, here are all the values we tested, etc. I'm not saying don't have the human go do exploratory things.
Jonathan:
Right.
Brian Greene:
But there's a huge amount you can automate. And also, you know, QA, usually when they hear, hey, we're going to go do automated testing, they think you mean you're going to do some unit testing sometimes on your build servers or whatever. Right. Like, that's not
Jonathan:
Mm-hmm.
Brian Greene:
real testing, right? So there's also this: doing real integration testing of a deployed piece of software is very different from what a lot of developers are used to, with lots of light integration. So there's a very different burden that you have to cross.
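To make the screenshot-per-step idea concrete, here is a hedged sketch in Python with Selenium; the URL, element IDs, and evidence paths are hypothetical. (Brian's pick, Robot Framework, supports the same pattern with keyword-driven scripts and built-in screenshot capture.)

```python
# Sketch: an automated UI test that captures the same step-by-step evidence a
# manual tester would initial. URL and element IDs are hypothetical.
from pathlib import Path
from selenium import webdriver
from selenium.webdriver.common.by import By

EVIDENCE = Path("evidence/login_flow")
EVIDENCE.mkdir(parents=True, exist_ok=True)

driver = webdriver.Chrome()
try:
    driver.get("https://qa.example.internal/login")        # step 1
    driver.save_screenshot(str(EVIDENCE / "01_login_page.png"))

    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-secret")
    driver.find_element(By.ID, "submit").click()           # step 2
    driver.save_screenshot(str(EVIDENCE / "02_after_login.png"))

    # The assertion plays the "expected result" column of the paper script.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```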
Jonathan:
Mm-hmm.
Brian Greene:
Then you start getting rid of the pushback, right? Then
Jonathan:
Mm-hmm.
Brian Greene:
then QA actually flips. They become your ally, and they go, oh, we get what you're doing. Can you go help these other teams do this? Right?
Jonathan:
Yeah.
Brian Greene:
But it-
Jillian:
Yeah, that's great. I think so much of the job of DevOps maybe in particular, and engineers in general, should be translating things for the people who are actually using the software or the system, or the device or what have you, in a way that makes sense, that's not all just tech speak. Like you said: here's this goal that you want to have, and here's how we're getting there, and here's how we're making it even better than the system that you had in place previously, whether that was a software system or a person taking screenshots and initialing them.
Brian Greene:
Well then, no go ahead, sorry.
Jonathan:
One thing I've heard from a large number of people, and you've addressed this, but I'd like to hear you address it head on. You hear the rhetoric that continuous delivery, continuous deployment, automation, CI/CD, all that stuff, that's great, but it won't work in my industry because it's regulated. Or more often even, it won't work in that industry that's regulated, and they're pointing to some other industry they don't even know. And medical devices come up all the time. You're not the first person I've heard say that the medical device industry really lives or dies on automation, but would you just address that? When somebody tells you, we can't do automation here because we're regulated, how do you respond? What's the answer?
Brian Greene:
Yeah, I mean, the definition of regulated often boils down to: we like controls, and computers are more reliable than humans. So the more things we automate with computers, the more controlled you can be, by definition, and therefore the more appropriate it is for regulated industries, right? Which is, I don't know, kind of an abstract answer. But it really is all about proving a state of control. All regulated industries kind of have this in common, and really it goes even beyond regulated industries, into things that are quality with a capital Q, right? Like ISO 9000 kind of stuff. All of these frameworks, huge amounts of them, boil down to: do you know how to maintain a state of control? Do you know what's going on? Is the system in the state you put it in? Do you know how you put it there? Do you know, when you make a change, what you expect to happen? When you make one of those changes, do you document that it did the thing you expected? Does it do that routinely? Or normally when you make a change, does it go to hell in a handbasket? Do you have records showing six releases in a row went to hell in a handbasket, so we had a meeting and we defined a corrective action plan, because this is just bonkers, right? Like, the releases
Jonathan:
Yeah.
Brian Greene:
need to be better. So there's so much of this where you go, wow, that just sounds like what I'd want a good software delivery team to do, right? It's just, you have to do it or you can't ship the code, or somebody might go to jail, or so on. In some ways it's an excuse to do all the cool DevOps-y stuff that you'd love to do anyway. You just have to learn how to frame it as: no, look, this lets us go faster, which equals cheaper dev cycles, produces higher-quality code, less risk, sooner. And when you pitch those things in non-controlled environments, sometimes the answer is, okay, we don't care. Right, but
Jonathan:
Mm-hmm.
Brian Greene:
in controlled environments, you actually can get a better, you get a huge tailwind when quality says, we fully support that, we think you should go spend money on it.
Jonathan:
Mm-hmm.
Brian Greene:
It'll make the next audit go better. Nobody wants a bad audit.
Jonathan:
Just one more devil's advocate argument then. So suppose that this person says, okay, Brian, that's true, that sounds wonderful. However, our regulatory agency explicitly requires manual testing. Now, I've never seen an agency that does that, but I've heard people make this argument all the time. And maybe some agency like that does exist somewhere in the world under some jurisdiction. I don't know. Have you ever seen that? How often does this come up? How would you respond?
Brian Greene:
So, you know, I said something earlier, right? And I mean, this is like all jobs, right? Regulated industries aren't the only jobs where you have to follow the law; you have to follow the law everywhere. But in a lot of companies, man, there aren't many laws that are super applicable beyond the relatively common ones. In regulated industries we have lots more laws, and then you turn those laws into... and this is critical: the laws are actually quite vague. The laws say things like, you have to prove that you tested your stuff. That's pretty vague. And you have to take that law and turn it into a policy that says, we will test all our stuff as appropriate. And then you have to have another policy that says, here's how we define appropriate, right? Because we have
Jonathan:
Yeah.
Brian Greene:
to categorize the stuff that we're gonna build so that we know the appropriate level of testing we're gonna do, okay? Now, if some person, and I'm trying not to use any pejorative terms here, if some person who's not thinking futuristically writes a procedure that says, we require manual testing... like, they wrote the policy, my company requires it, whatever. Okay. Here's the worst part. I mean, it's not the worst, it's the best part, right? The best part of working in controlled industries and quality management systems is that it's insanely simple, and people screw it up constantly. You have to say what you're going to do, then you have to do what you said you were going to do, and then you have to show that you did what you were supposed to do. You are bound to your procedures. Whether your procedure accurately reflects the law or not is a subsidiary
Jonathan:
Mm-hmm.
Brian Greene:
conversation.
Jonathan:
Right.
Brian Greene:
In an audit, they need to see that you have trained your employees to follow your rules. If you made stupid rules, like everything needs manual testing, well, if you don't do manual testing and those are the rules you have in place, you'll get burned in an audit, even if you can then show them the automated testing and so on. But if you have a memo from the quality team that says, hey, listen, this group of people over here is not following this testing procedure because they build this part of the cloud software and they have their own procedure for testing, and here's where their documentation is located, now you're fine. You can do whatever you
Jonathan:
Mm-hmm.
Brian Greene:
want.
Jonathan:
Mm-hmm.
Brian Greene:
And so I've heard people claim that, like, oh, manual testing is required. And I'm like, no, somebody somewhere in your org
Jonathan:
Right.
Brian Greene:
years ago maybe wrote that down or whatever. But most day-to-day activities that are quality controlled in an organization are three levels removed from the actual law.
Jonathan:
Mm-hmm.
Brian Greene:
And usually five years removed, time-wise, right? Because you're gonna write these policies and procedures and things, and then you revise them as needed, but it's not like thrilling work, right? So you kind
Jonathan:
Right.
Brian Greene:
of leave them alone for a while. And the laws and practices will actually change; the FDA will release new guidance. You know, the FDA has guidance they've recently released around doing machine learning in medical devices, which, like, most of them are now. If you go
Jonathan:
Mm-hmm.
Brian Greene:
to a med device pitch event,
Jonathan:
Yeah.
Brian Greene:
all of them talk about, hey, we've got this cool new piece of hardware, and here's the data that the device produces, and here's what we're going to do with it. And it's usually, here's the algorithm that we're running at the edge right now that gives us a little bit of market advantage. And there's this whole other matrix of how you get the device into the market and what you say it does now versus what it's going to do in the future. You need to get a device onto the market that is using and producing new data and has an algorithm out there. And this gets into MLOps, and where does this cross over with DevOps? Because they're really deeply related at this point. The whole ops cycle of upgrading that model, doing regression testing on that model, capturing all the outputs, and then pushing that new model back down to the device, so that the inferences it's making are better... that loop is pretty well defined by the FDA. They have clear expectations around that life cycle.
Jonathan:
Mm-hmm.
Brian Greene:
And while usually they're technologically kind of behind, they're pretty up on this one. If your device includes ML at the edge and you intend to do this, they now have a new pathway. Because all these device companies had this annoying question, which was: okay, I put the device out there and the software is locked, and changing it requires telling the FDA that you're changing it. Okay.
Jonathan:
Mm-hmm.
Brian Greene:
And so there are certain kinds of little changes that you can make and tell them about one way, and then there are big changes where you may have to prove new testing, right? So there's a real incentive not to change it, because it's expensive to change and the device is working.
Jonathan:
Mm-hmm.
Brian Greene:
But when you get to machine learning, you get to these algorithms at the edge, the whole value prop here is that as we get more data about the device's behavior, we're going to be able to upgrade those algorithms.
Jonathan:
Mm-hmm.
Brian Greene:
And so now you build the software itself around the algorithm, so you know this is a changeable part. And then part of your submission to the FDA is, here is our plan for how we intend to use data to upgrade these algorithms and push them back out to the edge. And again, you can either spend years doing that, or you can use extensive DevOps automation to get you through all of the work that's required. And I love the MLOps branding, but if you peel that sticker off, it's usually just a huge DevOps platform. And I don't wanna be that harsh, right? There's some cool model tracing and tracking stuff that really is a separate layer, but it sits really heavily on what you're already doing.
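As a rough illustration of the gate that update loop implies, here is a hedged sketch: a retrained model may only be promoted to the device fleet if it matches or beats the deployed model on a frozen regression set. The metric files, threshold, and report paths are all hypothetical.

```python
# Sketch of a model-update gate: the candidate must be at least as good as the
# deployed baseline on every tracked metric before it ships to the edge.
import json

def regression_gate(candidate: dict, baseline: dict, min_delta: float = 0.0) -> bool:
    """True only if the candidate matches/beats the baseline on every metric."""
    return all(candidate[m] >= baseline[m] + min_delta for m in baseline)

baseline = json.load(open("reports/deployed_model_metrics.json"))
candidate = json.load(open("reports/candidate_model_metrics.json"))

if regression_gate(candidate, baseline):
    print("PASS: candidate may be promoted to the device fleet")
else:
    raise SystemExit("FAIL: candidate regresses; do not push to the edge")
```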
Jonathan:
My flippin' toothbrush claims ML.
Brian Greene:
Yeah, I mean...
Jillian:
It might, you don't know. You never know.
Jonathan:
I mean, it has some sort of gyroscope in it. It detects the position I'm holding it in to tell me which part of my mouth I've brushed long enough. It's not very accurate, but it claims it uses ML to determine this stuff.
Jillian:
That would be great though. Imagine, I mean, if it does get accurate, just the level of torture that I could put my children through about brushing their teeth properly, as opposed to just making them count.
Jonathan:
Yeah, wait till it turns green. You can't go yet.
Jillian:
Yeah, exactly, I know. You're not allowed to go anywhere.
Brian Greene:
One of the best inventions, we didn't need ML, it's just a toothbrush that plays a song, and you have to brush your teeth until it's quiet.
Jillian:
Yeah, I don't...
Jonathan:
There you go.
Brian Greene:
That's a-
Jillian:
They have apps for that on the phone too, like, you know, different timers to do things like wash your hands and brush your teeth and all that kind of thing. So maybe the ML is a little bit overkill, and I could just be standing around with my kids and making sure they brush their teeth properly.
Brian Greene:
I like that. Yeah, I like that.
Jonathan:
So another question I have. I think anybody listening is familiar with the concept of a user story, and there's a lot of variation among user stories. Some are actual user stories as intended; some are just task lists. But what does a user story look like when you are building a regulated medical robot? I don't imagine the user story is, as a surgery patient, I want to have, I don't know, a mole removed, or whatever the thing is that a robot does. What do user stories look like when you're talking about these devices?
Brian Greene:
Yeah, yeah, it's interesting, because in the Agile community and the non-regulated community, we have this discussion all the time. In regulated devices, they solve this problem by writing a procedure that defines what they look like.
Jonathan:
Mm-hmm.
Brian Greene:
So each company will have a procedure, sometimes multiple procedures, about how they will manage requirements. And "user stories" is an interesting trendy expression, but it's nowhere in that ecosystem that I'm aware of. They talk about requirements. People talk about user stories because that's fluffier, and I remember when we started talking about user stories and blah, blah, blah. But computer science talks about requirements, and requirements management is an entire discipline. There are books on it. You can get a master's degree in requirements management, right? And so part of the answer is, don't call it a user story, because that leads
Jonathan:
Right?
Brian Greene:
you down the realm of weird debate. Call it a requirement, say that you should do requirements management, Google that, and go, oh, well, hell, there's a whole way to talk about this. And so they will talk about customer requirements, customer input requirements. And it's
Jonathan:
Mm-hmm.
Brian Greene:
very similar, right? As a surgeon, I want to do this, I expect this result. But it's very crisp language. They'll have entire rubrics for how to measure them: how do you measure the output, how do you test this? If you have a requirement, it must be testable, right?
Jonathan:
Nice.
Brian Greene:
Then they will write... and this is one where, right, you can go down the road of, Jira's the devil and all task management tools are the devil, whatever. Subtasks are the devil, right? That's really where you get to, because,
Jonathan:
Mm-hmm.
Brian Greene:
and I think most device companies, most regulated companies, do a better job of this than almost all software teams. They only do a better job of it because they have to. And it's this simple idea: oftentimes I have user requirements that I can resolve kind of one-to-one to things a developer ought to do. But oftentimes I don't. Oftentimes the user requirement points to three things that the underlying system should do. And so what you often have is a layer of user requirements, and then a bigger layer of system requirements. And in the
Jonathan:
Mm-hmm.
Brian Greene:
system requirements, you'll find all the other, like, non-functional requirements around scalability. You'll find things like... the surgeon didn't say, hey, if there's a power outage, I expect the device to do X, Y, and Z. They don't care about that, because it's not a clinical outcome.
Jonathan:
Mm-hmm.
Brian Greene:
Right? That doesn't help the patient. But
Jonathan:
Mm-hmm.
Brian Greene:
as a person delivering the system, you did an FMEA. You go back to that risk analysis, and your risk analysis asks, what happens if the power goes off?
Jonathan:
Mm-hmm.
Brian Greene:
So now in the system requirements I'm going to have all these extra requirements: you have to test the thing for the power going off. You have a requirement that says it has to perform an orderly shutdown after 10 minutes of uninterruptible operation. Show me that you did that with the device. Prove it. And this is where you get people signing forms that say, I did this test; there are tests that you really can't automate, right? But to your question: air-quotes user stories are managed religiously. There'll be one layer of user requirements, one layer of deeper system requirements, and then behind that is where you'll actually break it down into, air quotes, kind of stories that the developers are going to eat, right?
Jonathan:
Mm-hmm.
Brian Greene:
And that's sort of the third layer. And then under there is where you'll often see people having the argument about subtasks, right?
Jonathan:
Right.
Brian Greene:
So they manage this by really treating requirements management as a much more formal discipline and literally dedicating full-time staff to it. You go to most startups and mid-sized companies and say, hey, we're gonna hire an analyst; as a matter of fact, we want one analyst for every four developers, and their only job is to write stuff down... and they just think you're crazy, right? In a controlled industry, you totally can.
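To make the layering concrete, here is a hedged sketch of the shape Brian describes: one user requirement fanning out to several testable system requirements, which in turn map to developer stories. The field names are illustrative, not any particular tool's schema.

```python
# Illustrative data model for layered requirements; all names are invented.
from dataclasses import dataclass, field

@dataclass
class SystemRequirement:
    req_id: str
    text: str
    testable: bool = True          # an untestable requirement is itself a defect
    story_ids: list[str] = field(default_factory=list)

@dataclass
class UserRequirement:
    req_id: str
    text: str                      # e.g. "As a surgeon, I expect ..."
    system_reqs: list[SystemRequirement] = field(default_factory=list)

ur = UserRequirement(
    "UR-17",
    "As a surgeon, I can complete the procedure without losing device state.",
    system_reqs=[
        SystemRequirement(
            "SRS-104",
            "Perform an orderly shutdown after 10 minutes on backup power",
            story_ids=["DEV-881", "DEV-882"],
        ),
        SystemRequirement(
            "SRS-105",
            "Persist procedure state every 500 ms",
            story_ids=["DEV-883"],
        ),
    ],
)
```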
Jillian:
I think that's amazing. Like, I wish I worked places where things like that happened, because they never do. And then half the team walks out because of whatever happened, and now nobody knows where anything is, or what anything's doing, or why any decisions were made. But yeah, just as a real quick aside, I think every software developer, engineer, DevOps engineer, whatever kind of title we're giving it, should go try to find a project where they have to consider things that exist outside the scope of the software, outside the scope of these abstract integrations of classes or whatever that we all have in our heads, and go have to consider things that happen in the physical world: what happens when the power goes out, what happens if this thing gets dropped, what if it gets wet, what if the robot is having a bad day, all these kinds of things. Because those are always very important considerations that I find a lot of software people just haven't been exposed to enough, and I think it would be all good for people to be exposed to that kind of thing more.
Brian Greene:
The real world is really brutal.
Jillian:
It is.
Brian Greene:
The world of software is pretty safe, but when you start writing software that touches machines... I spent some time writing this integration software that sent jobs to industrial printers. I took this one-year deviation into industrial
Jonathan:
Hehehe
Brian Greene:
print control and some other stuff, and I had to go get certified as a printer operator. There was just no way around it. They were like, you're too dumb; we can't have you screwing up first-shift operations, you're just gonna get in the way on the floor. So we always start all the noobs on second shift: you can come in on second shift and we'll have the folks train you, and then we've got a big printer that'll be down on second shift and you can come use your software and test on that. And I mean, I had made no progress the legacy way. It was brutal, because I'd cut some code and then I'd have to go send it to the folks on the floor and be like, hey, can you test it? And they're busy, they've got jobs, right? This is a real machine that costs real money every time you test it. And finally I just said, okay, I guess I'm going to work second shift for a while and be a printer operator. And I would sit with my laptop on top of the thing, write code, build it, and then run a print job and physically take the output. That was the beginning of me saying, hey, listen, I've got to write real software that interacts with the real world, because it's just more fun. There's something about that, something about getting close to where reality is and then trying to write code that deals with it.
Jonathan:
That's an interesting thought. Most software development that I've done, the worst consequence of failure is an error message, or in some cases it might be an erroneous credit card charge that your bank can reverse anyway. When the worst outcome is your surgery is botched, or the car crashes into the wall, or a rocket explodes, you know, it's a whole different ballgame.
Jillian:
People die, yeah.
Brian Greene:
Yeah.
Jillian:
My first interaction with that was when I was still in clinical research, on the first clinical study I was ever on, so why I was even in there is kind of beyond me. But one of the things that was done was, when the patients would come in, they would have kind of all their vitals taken. And somebody just happened to be looking at the screen rolling by of all the vitals being automatically taken by our special machines and was like, that person has really high blood pressure, they're going to have a heart attack. And if they hadn't happened to see that, I don't really know what would have happened to that person, and I don't entirely want to think about it. So there are very real-world consequences to some of these things. Or for tests that take a while to come back: I was on a gestational diabetes study, and you do the blood tests and then they take some time to come back, and you need to very quickly be able to contact the person who had that test result so they don't die. There's a lot you have to think about when you're dealing with not just software but reality, physical reality, especially if there are people involved.
Brian Greene:
Yeah, physical reality makes it a lot more fun. You know what? Physical reality has even changed my opinion about a couple of key ideas in software. Right? So, Jonathan, if I told you, hey man, how should we assign catalog numbers to new objects that we're gonna create? You'd probably say, hey, sequential numbers. Makes sense. What if I said, hey, we're gonna use catalog numbers that encode things like the length of a screw and the diameter, right? So a 0308S is gonna be a three-millimeter by eight-millimeter slotted screw.
Jonathan:
Okay.
Brian Greene:
How would you feel about that as a numbering and naming convention?
Jonathan:
I wouldn't mind. I mean, the data modeler in me, you know, writing this SQL schema, for example, just wants a number. It's opaque as far as I'm concerned, for the most part. So give me a number. It could be a UUID, it could be what you said, it could be an incrementing integer, it could be a fingerprint scanned as a PDF. I don't care. Give me an opaque number, I'll put it in the database.
Brian Greene:
Right, right. Well, and they want strings, right? And on one hand, I totally agree: I just need a unique key for this table, a unique business key. On the other hand, though, you ask things like, well, don't you already store the length and the diameter somewhere else? The name of the thing is slotted screw. Why are we doing this weird construction? And the answer is: when you have 50 or 80 different sizes of screws and they're all really little, and they're all in one display case, and you're in surgery and you need one of a certain size, all of a sudden having the size and the screw information in the number makes it more useful to the human.
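Here is a hedged sketch of that kind of smart number: a parser for a hypothetical 0308S-style catalog code (two digits of diameter, two of length, a type suffix). The scheme is invented to match the example, not any real catalog standard.

```python
# Sketch: decode a hypothetical smart catalog number like "0308S", where the
# first two digits are diameter in mm, the next two are length in mm, and the
# suffix is the drive type. The scheme is invented for illustration.
import re

PATTERN = re.compile(r"^(?P<dia>\d{2})(?P<len>\d{2})(?P<type>[A-Z])$")
DRIVE_TYPES = {"S": "slotted", "H": "hex", "T": "torx"}  # hypothetical codes

def parse_catalog_number(code: str) -> dict:
    m = PATTERN.match(code)
    if not m:
        raise ValueError(f"not a smart catalog number: {code!r}")
    return {
        "diameter_mm": int(m["dia"]),
        "length_mm": int(m["len"]),
        "drive": DRIVE_TYPES.get(m["type"], "unknown"),
    }

print(parse_catalog_number("0308S"))
# {'diameter_mm': 3, 'length_mm': 8, 'drive': 'slotted'}
```

The flip side, which Brian gets to next, is that every product family now effectively reserves a regex-shaped slice of the numbering space.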
Jonathan:
Right, I'm not gonna argue with that, for sure.
Brian Greene:
Like having a number that's not... So there were multiple places across devices in a previous company where we'd have these whole discussions about smart numbering, and should we do it, and how annoying it is. Because it causes all kinds of weird problems too, right? Like letting the users choose. Because what it really means is, hey, when we want to go make something, we're gonna go define what is essentially a set of regexes, and we're gonna sort of try to protect that whole family of regexes, because
Jonathan:
Mm-hmm.
Brian Greene:
we don't know how many things we're gonna make, and we don't know what size variations they'll have. And it might take five years before we exhaust all the numbers in this subsequence.
Jonathan:
Mm-hmm.
Brian Greene:
But we don't want anybody else to name stuff that matches this. So it produces all these other weird problems at scale. But
Jonathan:
Right.
Brian Greene:
like, as a data guy, you know, for 10 years or something, "just don't ever use smart numbers" was an easy rule. Reality says they're insanely useful in the real world, and that percolates all the way back to your ERP and your CRM.
Jonathan:
Mm-hmm.
Jillian:
Yeah, it's so difficult to do that, though. Like, that's how gene and chemical names are. And recently there was a funny article... chemical names are supposed to be informative, like the name itself describes the structure of the thing, except it's such a pain to figure out that somebody wrote a machine learning model, like, we're sick of doing this, we have a new machine learning model and it'll get you your names most of the time. It was created out of pure frustration with the naming. I'll bet genes will have something similar at some point, I'm sure, if they don't already. But yeah, no smart names. It's all letters and numbers. It's all ID strings.
Jonathan:
I make a similar point about version numbers in software. Version numbers are a human convenience, for the most part. There are exceptions; there are tools that require an incrementing version number, for example. But if you're just building, say, a monolithic web service, use the Git SHA, unless your marketing team needs a version number, and then let them choose it. They want to call it SaaS 2023? Let them call it that. That doesn't have to have anything to do with the version of your code. Until you have dependency management tools that require SemVer or some other versioning schema, then you're going to need to do that. And it's always a pain in the butt, because then you have to start tagging things according to your human-readable versions and whatnot.
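One common way to get the "Git SHA as version" approach for free is to stamp the commit into the artifact at build time. A minimal sketch, assuming a Git checkout is available during the build; the output module path is arbitrary.

```python
# Sketch: stamp the current Git SHA into the build so every artifact and log
# line traces to an exact commit. Assumes git is on PATH at build time.
import subprocess

def git_sha(short: bool = True) -> str:
    args = ["git", "rev-parse", "HEAD"]
    if short:
        args.insert(2, "--short")
    return subprocess.check_output(args, text=True).strip()

if __name__ == "__main__":
    # Typically run from CI, writing a tiny module the app imports at runtime.
    with open("app/_build_info.py", "w") as f:
        f.write(f'GIT_SHA = "{git_sha()}"\n')
```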
Brian Greene:
So, actually, using versioning and having dependency management set up is on my list of minimum required things for a dev team.
Jonathan:
Mm-hmm.
Brian Greene:
All things will build with a version number from the first commit, and we put a bunch of stuff in place so that it's essentially invisible: as an engineer you can start a new module and it'll get versioned automatically and it'll get CI/CD'd and all this other stuff, right? So it's pretty transparent. But I'm pretty religious about it, because it's hard to introduce later, right? It's one of the things that bites you as soon as you try to break software apart, if you haven't been doing any kind of versioning. So I agree. Now, the other version number, which is the more important one, is not SemVer or CalVer but what we call MarVer, the marketing version number.
Jonathan:
Yeah, yeah.
Brian Greene:
What are you calling this externally? Completely divorced from what the actual code says, right? Like go
Jonathan:
Exactly.
Brian Greene:
look at the history of Windows releases, and then what would actually come out if you typed "version" at the command prompt across those same releases,
Jonathan:
Mm-hmm.
Brian Greene:
because the internal kernel number is the NT kernel number and is radically different from the Windows one. Like, I think they
Jonathan:
Exactly.
Brian Greene:
aligned them in the last release, didn't they? Didn't they make a huge jump, where the kernel all of a sudden jumped from like six to eleven or something?
Jonathan:
I don't know. Could be, yeah.
Brian Greene:
Anyway. So, yeah, there's this whole question of what marketing is calling it versus what the released build is. So again, regulated software: you're not required to, but it's very common... if you're gonna say that you're in a complete state of control, version numbers are a big part of that, right? They're a big, unique identifier that travels with the rest of your artifacts. I've got a bunch of test results? Okay, great, what build number were they associated with? Right? Like, that's the first
Jonathan:
Mm-hmm.
Brian Greene:
thing you're going to get asked, always. And so maybe that's why I'm so intent about it: the version number becomes part of the artifact identification.
Jonathan:
So I agree with that. My only distinction is that I think for many services, the Git SHA is sufficient, when there's no dependency between services. If you're building a final artifact, say a web service that serves... I don't know, maybe it's Facebook's Messenger service or something like that, right? Nobody ever sees a public version number, and there's no dependency on it or whatever. In that case, the only people ever consuming that version number are people looking at debug logs or debugging events or something like that, and they need to know which artifact caused that error message to happen. There, a Git SHA is probably good enough. Now, if you want to mandate version numbers everywhere consistently, I don't have a problem with that, as long as it's easy for the developers. You know, the reason I push against version numbers everywhere is that it often adds a lot of toil. But if you can get rid of that toil, I have no problem with it.
Brian Greene:
Version numbers... so we, in particular, use a combination of CalVer and SemVer. Turns
Jonathan:
Mm-hmm.
Brian Greene:
out, I think, using pieces of both of them is useful. It's probably one of the more interesting examples of where smart numbering is useful. And so we build it in out of the gate, because the Git SHA is useful until I start trying to write it in documentation and read it and look at it and talk about it. All of a sudden saying, hey, I'm looking at the March 22nd release... right? Are you looking at the March 22nd release? All of a sudden you want all that extra information, which is why we do a combination of CalVer and SemVer and build them in, right? Because of all that context. This is funny, because two and a half years ago, when we were starting NeuronSphere, we had a huge, I don't know, multi-day religious debate about this exact
Jonathan:
Ha ha ha!
Brian Greene:
subject: what is the version numbering scheme we're going to use? Period.
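A hedged sketch of the kind of hybrid Brian describes, combining a CalVer date with a SemVer-style patch counter and the Git SHA; the exact layout is invented for illustration, since the episode doesn't spell out NeuronSphere's actual scheme.

```python
# Sketch: a hybrid CalVer + SemVer-ish version string, e.g. "2023.03.3+1a2b3c4".
# The layout is invented for illustration.
import datetime
import subprocess

def build_version(patch: int) -> str:
    today = datetime.date.today()
    sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    return f"{today.year}.{today.month:02d}.{patch}+{sha}"

print(build_version(3))  # readable in docs ("the March release") and traceable
```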
Jillian:
It's all latest.
Brian Greene:
I'm also a big...
Jonathan:
Yes, just use latest and call it done.
Jillian:
That's all latest, it doesn't matter.
Brian Greene:
Yeah, so I... yeah, this is...
Jillian:
That was a joke, internet, all right? Like, calm down.
Jonathan:
Ha ha!
Brian Greene:
Nah, latest is fine most of the time. Just 1.* is usually going to get you what you need with most libraries.
Jonathan:
Yeah, yeah, right, right.
Jillian:
Yeah, it's all 0.0-dev or latest, and that's it.
Jonathan:
And then QA says, you have this bug, and you say, in which version? And they say, latest, and you just go find it, and you're done, you know?
Jillian:
That's it.
Brian Greene:
So when we, you know, delve a little bit into how we build things, right? We take a very aggressive poly-repo approach. So literally, multiple Git repos is my assumption, multiple Subversion repos, multiple... I don't care what your back end is, right? Building software as a set of related components from the very moment you start. And again, maybe it's hard, but if you remove the toil, I like your expression, if you remove the toil, the developers don't even notice. No matter what kind of technology you're trying to build, all of our repos track version in the exact same way, with a version file that's in the exact same place in the repo, and all of our tooling uses it, really pervasively. And so we, I guess, attacked dependency management as a very, very first-class citizen; it was one of the things we went after when we started building our toolkit. Because it's hard, and if it's solved well, I think you can scale a software team much, much farther, sort of easier. But you're right, you have to get it completely out of the developers' way. You can go download the NeuronSphere toolkit and then HMD repo create and go create all kinds of stuff, and until you need to really use dependency management, you won't even notice that it's doing all this stuff behind the scenes. So I don't know, I suppose I have to agree with you. It's really annoying to do.
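Here is a hedged sketch of that same-file, same-place convention; the file name and fields are hypothetical, not NeuronSphere's actual metadata format.

```python
# Sketch: every repo carries a metadata file at a fixed path, so any tool can
# discover what a checkout is. File name and fields are hypothetical.
import json
from pathlib import Path

META_FILE = "meta.json"  # same place in every repo, by convention

def read_repo_meta(repo_root: str) -> dict:
    """E.g. {"name": "vpc-main", "version": "2.3.1", "requires": {...}}."""
    return json.loads((Path(repo_root) / META_FILE).read_text())

meta = read_repo_meta(".")
print(meta["name"], meta["version"])
```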
Jonathan:
Yeah. Until it's not. Until you solve the problem, right?
Brian Greene:
Yeah, until the user story says: as a dev manager, Brian requires this and finds
Jonathan:
Hehehe
Brian Greene:
it annoying. So, you know, we satisfy that with something that solves it pervasively, and then we don't think about it anymore.
Jonathan:
So let's talk more about Neuronsphere and the products you guys produce and sell. Just give us the introduction.
Brian Greene:
Sure. I really latched on to this platform engineering terminology. A couple of years ago when we started, people weren't necessarily using that. For me, it's the implication that there's a toolkit here and that you're trying to build a coherent thing across a series of, how do I say, old disciplines now. I mean, we've been doing DevOps-y stuff for like 10 years, right? We've been monitoring systems for like 30 years or whatever. There's a lot of stuff in there that's just a conglomeration of what we've done before, but trying to coherently focus on developer experience and how to unify that, that's kind of been the thing. So NeuronSphere starts with a series of CLI tools and a philosophy about how to build scaling software. I said version numbering and dependency management are important: if you go use any of the NeuronSphere tools, they will create repositories that have version numbers and a metadata file, so that you can track runtime dependency management. We do dependency management at runtime, and that's how we solve it. So, a big challenge in cloud deployment, sort of CI/CD kinds of things: I'm gonna go deploy this thing, and if I'm deploying lots of smaller modules in these smaller repos, I need a way to do dependency management such that my deployment engine can check and make sure that everything's okay before I push my changes,
Jonathan:
Mm-hmm.
Brian Greene:
right? And as an engineer, I want the ability to say, hey, listen, here are the dependencies that I want in the target platform. Like if I don't have a VPC, I can't deploy. If I don't have a whatever, I can't deploy, right? And
Jonathan:
Mm-hmm.
Brian Greene:
right at that moment, you hit version numbering, because you say, hey, listen, I'm gonna go deploy a microservice, and a microservice requires a VPC named main at what version number or greater? Because immediately we can't just say, hey look, any old version of that thing will probably work; that's not true, right? You built your module against a known set of dependencies. And again, it comes down to state of control: do you know what you're building, and can you control it? So NeuronSphere's deployment engine, and the way the repos are structured, will use this to build change sets that it deploys across multiple environments. So you have a developer, they do their work in dev. Actually, we have a video about this on our YouTube channel; you can go to the NeuronSphere channel on YouTube and there's a video on there called something like drift detection between multiple environments. And this is a common problem: a developer goes, they wanna build some isolated stuff. They wanna test it and make sure that it works. So: I wanna go test this in an ephemeral, isolated environment that's mine. And just to do that, I kinda want dependency management, because I don't want the entire stack, I don't want the whole solution, but I know that I need a few pieces at certain versions. So grab me those pieces and their dependencies, bring them down, let me do some work over here, create some new versions of some modules, and now I'm going to submit those for the pipeline. So they're going to go into dev, and now that they're in dev, we're going to look at the deployed modules. Again, these are standard-shaped repositories. We're going to go all the way back to what I said about integration testing, so we want to integration test what we just deployed. The DevOps space, I feel really strongly about this. We have all these discussions about, should we have multiple environments, and how often should we do deployments, et cetera. If you're doing infrastructure as code, then deploying the infrastructure as code to an environment, seeing if you can apply the change successfully, is literally the only way to test it. So the more often you do that, the more confident you can feel in your infrastructure as code, right? It's super funny to me. It's like, oh, it's the infrastructure layer, so, you know, we save up the changes for a couple months at a time, and then we occasionally do a Terraform plan, and it surprises us.
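(A minimal sketch of what "a microservice requires a VPC named main at version X or greater" could look like as data plus a pre-deployment check. The metadata shape, module names, and versions are all invented for illustration; this is not NeuronSphere's actual format.)

```python
# A module declares what it was built against; the deployment engine
# refuses to deploy until the target environment satisfies every floor.
MODULE = {
    "name": "orders-service",
    "version": "1.4.2",
    "depends_on": [
        {"module": "vpc/main", "min_version": (2, 0, 0)},
        {"module": "postgres/core", "min_version": (5, 1, 0)},
    ],
}

def can_deploy(module: dict, deployed: dict) -> bool:
    """True only if every dependency is present at >= its declared floor."""
    return all(
        dep["module"] in deployed and deployed[dep["module"]] >= dep["min_version"]
        for dep in module["depends_on"]
    )

# The target environment advertises what it already has, at which versions.
env = {"vpc/main": (2, 3, 1), "postgres/core": (5, 1, 0)}
assert can_deploy(MODULE, env)
```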
Jonathan:
Yeah.
Brian Greene:
Like, that's bananas, right? And so we kind of combat this. We have a lot of tooling around building small repos that do nice little things. Then you do dependency management on top of those, and you have a deployment engine. So now we can deploy all this stuff to dev, and we're going to look in each one of those repos, because any one of those repos can advertise that it has integration tests. And you should be able to run those locally. You should be able to run those in your isolated developer environment: yep, my integration tests are good and they're gonna work. Okay, well now the deployment engine is gonna run them automatically in dev. And we're gonna do things like stamp the build. So this is where version numbering comes in. We're gonna automatically stamp the version number and the build number and the environment information. We're gonna stamp all this trace information into the test results, and we're gonna save it in a central location. We're going to relate it in the graph database to the artifacts that were deployed. We're going to build all this crazy traceability: okay, we know exactly what you did, and your tests passed in dev. Now we're going to deploy that same change set to test. Now we're going to run the tests again. And this is hilarious: we're going to run the same tests again. This is the default; you don't have to do this. Just doing this, you'd be blown away by how many bugs you find. I'm going to just try to deploy the same stuff again to the test environment, I'm going to try to run the exact same tests, and yeah, shocking how many times that doesn't work. We're not even at QA yet. We're still just deploying one module. Damn it, I forgot fill-in-the-blank detail, right?
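(A sketch of the stamping idea: wrap raw test results in an envelope carrying version, build, and environment, so results can later be related back to the exact deployed artifacts. The field names are assumptions for illustration, not NeuronSphere's schema.)

```python
import datetime
import json

def stamp_test_results(results, module, version, build, environment):
    """Attach build/version/environment trace info to raw test results."""
    envelope = {
        "module": module,
        "version": version,
        "build": build,
        "environment": environment,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "results": results,
    }
    return json.dumps(envelope)  # ship this to the central store / graph DB

print(stamp_test_results([{"test": "smoke", "status": "pass"}],
                         "orders-service", "1.4.2", 87, "dev"))
```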
Jonathan:
Mm-hmm.
Brian Greene:
Okay, so the engine is going to go run your tests, and the NeuronSphere engine says, you know what? You passed. Now, we talked about CI/CD and industries, and, like, can you do continuous delivery? So let's talk about continuous delivery a little bit, because everything I just described was automated, and it was super cool, and it feels very DevOps-y. If you want, you can keep going. You can auto-deploy to QA, you can do what you want. You can auto-deploy to prod, you can auto-deploy to DR. Have a nice day; the NeuronSphere deployment engine is very flexible and will happily do that for you. What most people want is a combination of continuous integration, continuous delivery, and deployment through lower environments. And then it's a little bit like a reverse mullet, right? It's kind of party in the front, but waterfall at the back.
Jonathan:
Ha ha ha ha ha ha.
Brian Greene:
Because in the beginning, you want all this automation, as fast as you can go; and you know, it's really not fast and loose, you're doing lots of good stuff around controls. But now we've got like six or seven different changes that we've gotten into test. And we've tested them all, because the integration tests try to test them in isolation as best they can. But there are still interdependencies.
Jonathan:
Mm-hmm.
Brian Greene:
And this is critical. We kind of want a snapshot of what change set we're going to move into QA, and I want to be able to deploy that exact same change set into prod and DR later. And now we're going to introduce a time delay. Because I've got a function, actually: our deployment service says, give me the delta and build me the change set between any two known environments. And then we'll use that as our starting point to decide what we want to move. So you do this drift detection, passing in what's in test and the current state in QA, and that gives you a change set. That's the change set, that's the trackable release, that you're interested in. Now your QA people are gonna pay attention, and what is it? It's a list of modules and a list of version numbers. I'm
Jonathan:
Mm-hmm.
Brian Greene:
gonna apply this set to the known set, the list of modules and list of version numbers. I'm going to apply it, and then I'm going to take a version-numbered set of integration tests and run it against that in QA, and it's going to produce those same results, and I'm going to store them, and now I can make a determination. I really like this exact combination, okay? Because I'm not doing the continuous everybody-gets-to-throw-their-code-in-the-back-end-of-the-pipe thing. That's great, but at some point somebody says you've got to slow it down a little bit, like, we don't do production releases every day, we only do them on Wednesdays or whatever. As much as you want to do continuous delivery, hardcore,
Jonathan:
Mm-hmm.
Brian Greene:
all the way to prod, it's really hard to get people to let you do it, right? You can do it with little services that they don't care as much about.
Jonathan:
Mm-hmm.
Brian Greene:
But the main ones, OK, now I've got this change set. OK, roll that to prod when we want to.
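(The delta between two environments reduces to a dictionary diff: modules whose deployed version differs. A toy sketch of that drift-detection idea, not NeuronSphere's actual API:)

```python
def change_set(source_env: dict, target_env: dict) -> dict:
    """Modules (with versions) in source that are missing or stale in target."""
    return {
        module: version
        for module, version in source_env.items()
        if target_env.get(module) != version
    }

test_env = {"orders-service": "1.4.2", "vpc/main": "2.3.1", "reports": "0.9.0"}
qa_env = {"orders-service": "1.3.9", "vpc/main": "2.3.1"}
print(change_set(test_env, qa_env))  # {'orders-service': '1.4.2', 'reports': '0.9.0'}
```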
Jonathan:
Right.
Brian Greene:
And go roll that one to DR. And there are conversations we can have about how to warm-spare, sort of modify the change set to warm-spare on the infra layer, and whatever. But that is it: we provide a set of CLIs and project templates, and really a development methodology to go along with them, for how to build software that will do that whole stack kind of out of the box. You know, we work against AWS right now; we'll be expanding platforms at some point. But you can go pip install neuronsphere and get all of our local tools and start doing development and project templating. There's a whole other layer inside there. We have a code generator and a documentation-as-code tool that we also integrate. And beyond regulated environments, I think it's a generally good practice for developers to do documentation. So in our YouTube videos, you'll see us always switch over to the documentation, and it's generated documentation that comes out of source control repositories. That gets built as part of the CI/CD pipeline too, right? So you can always find the right version of the documentation to go with the tests, to go with what's deployed where.
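(Documentation-as-code in one toy step: docs live in the repo, and the pipeline renders them stamped with the same version as the artifacts, so "the right docs for what's deployed" is a version lookup. The docs/ layout and this function are assumptions, not the actual NeuronSphere tool.)

```python
from pathlib import Path

def build_docs(repo_root: str, version: str, out_dir: str) -> Path:
    """Concatenate the repo's markdown docs into one version-stamped page."""
    pages = sorted(Path(repo_root, "docs").glob("*.md"))
    body = "\n\n".join(page.read_text() for page in pages)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    target = out / f"docs-{version}.md"
    target.write_text(f"# Version {version}\n\n{body}")
    return target
```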
Jonathan:
Mm-hmm.
Brian Greene:
So we supply a command line tool for that and a bunch of utilities. It's really the minimum viable toolkit I think a developer and a development team need to be successful. And it starts with a bunch of CI/CD, a bunch of things about version control, and a bunch of things about documentation and standards and testability. And you can go look at the NeuronSphere code today. We have about 250 repositories.
Jonathan:
Okay.
Brian Greene:
And if you tell an engineer who's not touched one of them, hey, go look in such-and-such CLI and make a change... we did this as kind of an experiment the other day. I said, hey, Alex, take me over to this other CLI, I want the following changes. It took like 12 minutes to figure out where to make the changes.
Jonathan:
Okay.
Brian Greene:
Because we have a dozen CLIs, but they all look like the same engineer wrote them,
Jonathan:
Okay.
Brian Greene:
right? We have 26 microservices, and they all look like the same engineer wrote them, right? Because we use all these tools that make this poly-repo development experience very, very consistent and automated. It also means the developers have lots of portability around the stack, and you can move into a different piece of it, because we assemble all software as a collection of those repos at versions. So it's literally dependency management for all of the things that you build. If that's a collection of Superset dashboards, those go in a repository, they will have a version number. You can set a dependency between those and your Airflow DAGs, which are in a repository with a version, right? So you can deploy those. And then there's the infrastructure to continue to extend it and add new tools. So it's kind of a toolkit.
Jillian:
You're speaking my language here with the Airflow DAGs and the Superset, and I feel like I could just harass you about that for quite a while, but I don't think I will today.
Jonathan:
Hehehe
Brian Greene:
Yeah, so we talk about platform engineering for data, because really, you know, a big piece of it is how do you take DevOps principles, how do you do CI/CD and aggressive testing, and apply it to the data infrastructure space, where the tooling is notoriously bad at it? You know, I said at the start: I caught this bug many years ago, writing PowerShell script wrappers so I could do CI/CD with SQL Server stuff, because their development team had never thought about, like, rational deployment, apparently. But that's never stopped. Most of the data tools don't do great at participating in this ecosystem. And so you're building kind of an underlying substrate, because everybody's got an integration problem now. And it's not an integration problem between platforms; it's an integration problem of all the tools that you need to try to get together. Like, how many tools do I need to get together to have QA and prod, if I just wanted two environments, right? And whose job is that? I think it's the platform engineering team. It's, you know, whoever that is, some traffic cop that is saying, look, we're the ones that are helping you create multiple environments. But what that means is you need a toolkit to adapt all these tools that don't do a good job of it, or that have a really weird opinion about how you should do it. And really, in your own spheres, look: all the tools that do dev, test, QA, prod... as an engineer, you should have a very similar experience, whether it's Databricks or Airflow or whatever. So what is that glue layer, as a platform team, that you use to onboard new tools so that they will quickly behave well and meet your minimum requirements for participation in your larger DevOps and platform ecosystem? How do you add dependency management to all these tools that inherently don't have it? So this is what we do. Also, I don't know if we're the only SaaS that does this. Whenever anybody says, we're the only one who does this, I always like to raise my eyebrow and go, really? It's a big world, right?
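(One way to read "minimum requirements for participation": the platform team defines a small adapter interface, and every onboarded tool, Airflow, Databricks, whatever, gets a wrapper that implements it. Entirely illustrative; the interface below is invented for this sketch.)

```python
from abc import ABC, abstractmethod

class ToolAdapter(ABC):
    """The contract a tool must meet to join the platform's environments."""

    @abstractmethod
    def deploy(self, artifact: str, version: str, environment: str) -> None:
        """Push a versioned artifact to dev/test/QA/prod."""

    @abstractmethod
    def deployed_version(self, artifact: str, environment: str):
        """Report what's currently deployed, for drift detection."""

    @abstractmethod
    def run_integration_tests(self, environment: str) -> bool:
        """Run the tool's advertised integration tests in an environment."""
```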
Jonathan:
Yeah.
Brian Greene:
We use NeuronSphere to build NeuronSphere. We use our own DevOps platform, we use our own deployment engine, we use our own testing engine, we use our own CLIs. It is 100% dogfooded. You can download all the source code, and you would only use NeuronSphere tools to completely rebuild it. And so, as far as, can we make life better for developers? I don't know; we will make it as good as we make it for our own developers, but they use it to produce the rest of the thing.
Jonathan:
Mm-hmm.
Brian Greene:
And we're pretty lazy. Right.
Jonathan:
The best developers always are, aren't they?
Brian Greene:
What was it? The best developers are sloth, hubris, and... I don't know, too lazy to figure out the third one.
Jonathan:
Maybe ChatGPT knows.
Jillian:
I think laziness. That's the Lincoln Stein quote, right?
Brian Greene:
No, it's Larry Wall, the inventor of Perl. The best software developers are sloth...
Jillian:
Okay, yeah, Lincoln Stein is another Perl guy.
Jonathan:
Laziness and patience and hubris, is that it?
Brian Greene:
Impatience, impatience is it. Yep. And hubris, right?
Jillian:
I mean, that fits anyways.
Brian Greene:
Like the belief that I can solve this problem, right? The belief that I could take on this problem of this size, coupled with an unwillingness to write much code, and sort of impatient, you know, they wanted it done yesterday. You're right, it's difficult when you say that to HR, right? Like, how do you judge your engineers? I really look for the lazy ones. You don't put that in the performance review, but
Jonathan:
Hehehe
Brian Greene:
behind the scenes, really, it's: how do I write the least amount of code to effect the greatest amount of change?
Jillian:
Yeah, the ones who decide, like, I am sick of this and this is being changed now and like just, you know, go do it.
Brian Greene:
Yep. Yep.
Jonathan:
Well, we've been talking for more than an hour. I've enjoyed this conversation. You and I, Brian, met a few weeks ago, and I enjoyed that conversation equally. You're always fun to talk to. But we probably should wrap this up. Any last comments you'd like to make? Anything we should have talked about that we failed to, before we move on to picks?
Brian Greene:
No, I think it was some great discussion.
Jonathan:
Okay, awesome. Well then, I suppose we should do some picks. Jillian, do you have anything ready or should I go first?
Jillian:
Uh, I do. I've been reading a book by Stephen King called On Writing, which is kind of half writing advice from Stephen King and half memoir. If you do not know anything about him, I don't want to spoil the book; he's a very interesting character. He's also an author from the part of the world where I'm from: he's from Maine and I'm from New Hampshire, which are very close. It's been a very interesting read. Stephen King is quite the character. So, that's my pick.
Jonathan:
And he invents a lot of great characters too, doesn't he?
Jillian:
He does, yeah. I forget how much of the money produced by the writing industry goes to Stephen King, but he also has books and things. I would not recommend reading his books and then driving around at night in any rural area anywhere, but especially any rural areas in Maine. I'm just gonna throw that one out there.
Jonathan:
Ha ha!
Jonathan:
I'll save those books for... not my next Maine road trip.
Brian Greene:
Thank you.
Jillian:
You know, when you're on a nice bright sunny beach somewhere where nothing can... no Pet Sematary can jump out at you.
Jonathan:
Well, I have two picks for the week. I'm gonna pick a book also. I just finished listening to the audiobook version of a book by Mariana Mazzucato, I think that's how you say it. It's called The Big Con. The subtitle is: How the Consulting Industry Weakens our Businesses, Infantilizes our Governments and Warps our Economies. And it-
Jillian:
BOOM. I don't know about this pick, Jonathan, I'm feeling personally attacked over here.
Jonathan:
You know?
Brian Greene:
Actually, I feel like I need a link to this. I feel an affinity already with this book just based on that.
Jonathan:
I think I saw it recommended on LinkedIn or something, but it talks about how some of the really big multinational consultancies, you know, McKinsey, Boston Consulting Group, some of these organizations, frequently kind of are, well, it's in the title, con organizations, and how they often perpetrate a conflict of interest, especially when working for government agencies. They talk about the COVID response and military consulting and stuff like that, how in some cases they're literally advising the government to buy services from their own clients, and other ridiculous things. So it's not a treatise against consulting as a profession. It's really about specific ways of consulting that are problematic. I consider myself a consultant also, so I felt personally attacked a couple of times, but I don't know; those of us who are consultants could take this as a warning of what not to do. My second pick is a little more lighthearted. I run another podcast that started earlier this year called Cup o' Go, which is about the Go programming language. It's a weekly news program about what's new in the Go community. And we've recently started selling merchandise. Just today, my Cup o' Go cup arrived in the mail, and so I'm so excited.
Jillian:
I was gonna comment on your mug, it's really cute.
Jonathan:
It's so cute. I love it. If you're watching the video, you can see it. If you're listening, you'll have to go to cupofgo.dev to see what it looks like. But that's my second pick. These cups are just cute. They're 20 bucks, including shipping worldwide, so. Brian, you have a pick for us?
Brian Greene:
I'm gonna give you a pick. It's a tool, because I'm a tool guy about half the time. I mean, first, Tool the band: if you don't listen to Tool, you should definitely listen to
Jonathan:
Yeah.
Brian Greene:
Tool. But no, in the tooling space, I found this one a number of years ago, but I swear everybody I bring it up to says, I've never heard of it. There's a tool called Robot Framework, and Robot Framework is a test automation tool. It is not a browser test automation tool, right? It's not, like, a Selenium competitor,
Jonathan:
Mm-hmm.
Brian Greene:
it's sort of a level above that. You know, as a person who's been through multiple programming languages, I love frameworks, I love tools, and really, this one has been incredibly impressive. When I talk about doing integration testing as part of your delivery process, I'm always doing it on the assumption that you're using Robot Framework, because it makes it useful and easy. You can use it to produce test output that reads like a story, which QA loves, and end users love, really. Yeah, when I talk about how to make testing valuable and how to really do this at scale, it's always because Robot Framework is behind it. So it's a testing tool, don't be scared. It's insanely extensible if you're into Python; it's really extensible with just a tiny sprinkle of Python. But really, it's like a DSL-making tool for testing. And there's a global community; they have a global user conference for Robot Framework every year, right? All
Jonathan:
Nice.
Brian Greene:
the QA nerds all get together. But if you're looking at different ways to do testing, or thinking about how to bang on systems, whatever it is: Robot Framework, I really can't recommend it enough.
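(The "tiny sprinkle of Python" extensibility is real: Robot Framework imports a plain Python module as a keyword library, and each public function becomes a keyword. The health-check service below is a made-up example.)

```python
# healthcheck_keywords.py -- importable from a .robot file via:
#   Library    healthcheck_keywords.py
# Each public function becomes a keyword, e.g. "Service Should Be Healthy".
import urllib.request

def service_should_be_healthy(base_url: str):
    """Fail the test unless GET <base_url>/health returns HTTP 200."""
    # urlopen raises on 4xx/5xx; the explicit check guards other non-200s.
    with urllib.request.urlopen(f"{base_url}/health") as response:
        if response.status != 200:
            raise AssertionError(
                f"{base_url}/health returned {response.status}, expected 200"
            )
```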
Jonathan:
Awesome. I'll check it out.
Brian Greene:
Totally open source.
Jillian:
Yeah, that looks cool.
Jonathan:
Very cool. Well, Brian, Jillian, this has been fun. Thanks everyone for listening.
Jillian:
Good heads-up.
Brian Greene:
Thank you, Mark.
Jonathan:
Hope to see you here next week for another adventure.