Developer Workshop: A deep dive into IDQL and Hexa [webinar transcript]
On demand webinar: “Developer Workshop: A deep dive into IDQL and Hexa”
Behind-the-scenes of the new identity standard and open source software used to unify policy.
[00:00:00] Mark Callahan: Welcome to our webinar this morning, entitled “Developer Workshop: A deep dive into IDQL and Hexa.” This is a follow-on session after our panel webinar two or three weeks ago, when we first publicly introduced IDQL and Hexa. We realized that conversation was very much just that, a conversation; we really didn’t get into the behind-the-scenes of how the code works and what all was involved with it. And due to the interest, of course, in how all that looks, we wanted to share that with you today.
So without further ado, I’d like to introduce the speakers that have joined us. I’ll be your host and MC; my name’s Mark Callahan, and I’m the head of product marketing here at Strata. We also have Gerry Gebel joining us, who’s the head of standards at Strata Identity. We have Neil Danilowicz, who is the principal architect at Versa Networks and a MEF member and co-editor. And Mike Barinek, who’s the lead developer and co-founder of Initial Capacity.
And we have a couple of additional guests, developers who have actually worked with the Hexa software itself: Eli Friedman and Hunter Gilane, who are going to be joining us as we talk through how we built what we built.
And on that note, what I’d like to start with is a little bit of why we built it. I know that we spoke about this on our initial panel webinar previously, but before we get into how we built everything and how it works, maybe we’ll take a step back and look at the whys. What were the market conditions? What was the need? Where were the shortcomings of other existing standards that led us to create Hexa and IDQL? And so with that, Gerry, I’d love to turn it over to you for a quick background.
[00:01:36] Gerry Gebel: Sure. That sounds good, Mark. Thanks very much. And yeah, I’ll set some context and background here before we get into the code base and have Mike take over from there.
But the environment that we found ourselves in, in talking to customers, was the challenge of access policies being fragmented across the cloud landscape. It’s what we call the east-to-west vector: Google, Azure, and Amazon each have their own way of defining and managing access policies within their environments.
Then of course, this is compounded when you look up and down the stack, that is from the application to the platform, the data, and the network layers. Each of those layers has its own technologies and its own ways to define policies. So that’s a massive problem and challenge of complexity for organizations to manage.
So that’s why we came up with IDQL and Hexa, which I’d like to say are two sides of a coin. IDQL is the declarative policy format where you can define access policies and rules, and Hexa is the open-source software that brings this to life, that makes it operational. And we’ll see here in a moment, as I do a demo, how it performs its three main functions. The first is discovery: for any integration or connector to a targeted environment that we have, that discovery process goes out and looks to see what resources are associated and what policies are defined.
Then it can translate them into the IDQL format. So you have one place, one uniform way to express policies for all of these target systems under your management umbrella, and one place to make updates or do other management functions. And if I have changes to propagate out to the target environments, we go back through that translation function and use orchestration to activate them in the target systems.
So that’s the sort of thing that I’ll demonstrate here momentarily. So let’s switch from the slides to something else here. This is the project website, hexaorchestration.org. It has the same graphic here and lots of other information on the page. If you go all the way down to the bottom, there’s a form to fill in, for example, to join the working group.
And we’d love to have you join and help us progress the standard and the open source. But there’s also a link here to our GitHub repo. And this is interesting, because the demo I’m about to show, you can download and get running yourself. So you just go here to download the code base.
And if you look in the readme, and Mike will cover more of this later, there are only three prerequisites you need to run the demo that I’m about to do: Go, Pack, and Docker. Then it’s really just a matter of unzipping the code base that you downloaded, building the software images, and using Docker to get things up and running.
And it tells you where things are running, what ports, and so on. So let’s take a look at some of the components that come in the demo environment. The idea here is that we wanted the software packaged up so that almost anyone could download it and, in literally five to ten minutes, have a working system up and running. Then you can do your magic and make extensions or contributions from there.
But the environment includes this policy administration console as a sample UI, and we include this demo application we call Hexa Industries. We’ll get into the functionality here, but it uses OPA, Open Policy Agent, to control access to these tabs on the left. If you look on the upper right, you’ll notice that I’m logged in as a salesperson, or within the sales group.
And so I can see the dashboard. I have access to the sales tab, and also to the accounting tab. That doesn’t sound right. Maybe we should fix that. If I’m in sales, I probably should not see accounting…
[00:06:15] Mark Callahan: For sure. For sure.
[00:06:16] Gerry Gebel: For sure. Right. And if I click on HR, I am not authorized to see that. So from our policy admin UI, we can connect to different environments and do discovery. If we connect to this OPA system, I just need to give it a configuration file to show where it can find the OPA bundle server. So I just click on that file and hit open, and you can see now I’ve got an Open Policy Agent bundle listed here. Now, if I click on applications, I get more detail about this application.
And at first we show you just a display window at the top. We format the JSON nicely, so you can see the subject, action, object, and so on. And sure enough, if we look at the Accounting resource, we can see that Sales has access. Now, down below is the JSON format of this. I’ll make it a little larger.
And what we can do is just go in and edit that; we have a simple edit capability. We’ll just scroll down to the resource listing for accounting here, and here’s the sales group. So we’ll just remove that and save it. And now what the Hexa orchestrator is doing is rebuilding that bundle, so that when we go back to the application we still have access to the dashboard.
Great. We still have access to sales, as we should. And now we’ve corrected this rule, so we no longer have access to the accounting tab, and we are still denied access to the HR system. So that’s a quick demo. And just to further illustrate, all of this is running on my local system; maybe you could see that from the URLs I was using.
So I’ve just got a number of containers out here in Docker for the orchestrator, the admin, the demo app that you just saw, the Hexa Industries app, and we’ve got an OPA server out here. So all of these components are downloaded and installed through that process. It’s super easy, super fast to do that.
[00:08:42] Mark Callahan: And when you showed that user interface, Gerry, we talked about one of the benefits of IDQL being that the declarative language is human readable. This is exactly what we’re getting at with the subject, action, object as we look at this, is that correct?
[00:08:55] Gerry Gebel: That’s right. You know, we try to simplify it further for those that don’t read JSON natively.
We just showed you this display window here at the top. Now, Mark, that was for an OPA-based application. I’ve also got another demo app here. This one’s not included with the package on the repo, but it’s another demo app that we use, and this one is running on GCP. So, as you can see from the URL, it’s not local on my machine.
And this one we use to show some different kinds of functionality, or how we can manage a different kind of application. Here we look at the HR part of the portal for Canary Bank, a fictitious and not-yet-famous multinational financial institution with operations in the US, the UK, and the EU, and you can see at the moment I have access to all three regions. However, Brexit did happen, so maybe I should not have access to the UK region. So let’s go back to discovery and connect to that GCP-based app. We follow the same process. In this case, we give it a private key file, because we need to connect to that project within GCP. Down below here is an example of what that looks like.
I’ve got one on my desktop; load that, hit install, and now we’ve got this GCP Canary Bank demo app listed. If we go to applications, now we see three more resources listed here, and that aligns with the three regions of Canary Bank. And if we go into the details again, here’s the display window that shows who has access.
And in this case, we have some individuals, myself included, and a domain. Normally you would not have individuals listed in your access list, but for demo purposes, this is what we do. So let me again hit the edit button and remove my entry from this list, and we’ll hit save. It does take a few moments for this to propagate through the GCP layer.
So while that propagates, we’ll stall and talk about other things here. For example, you might note that within the Canary Bank application resources, two of them start with k8s, so they’re running in Kubernetes, and the first one is using App Engine. And it’s all deployed behind the Google Identity-Aware Proxy.
So even though we’re working with a single platform here, we’re using multiple technologies or capabilities behind the scenes. So ultimately, when we get back to Canary Bank here and refresh this, at some point we’ll get an error message for the UK region. It’s just another way to show how we can work across OPA-based systems and apps deployed on GCP, Azure, and Amazon.
And one thing I’ll point out is that because we’re using Hexa and IDQL, I can manage these environments even though I don’t know how to change the configuration natively on GCP. I don’t know how to write Rego code for an OPA system, but I can manage IDQL through this interface. And that’s a tremendously powerful aspect.
And by now, yep, sure enough, it sees my email and I no longer have access to the UK region. So that’s the end of my demo. What do you think of that, Mark?
[00:12:50] Mark Callahan: That was great. That was exactly what I know people were asking about, both during our previous webinar and also last week when we were visiting and talking with people at Identiverse in Denver. It was great to hear the interest and the need firsthand from people as we spoke with them at the show last week.
And so, as we transition a little bit off of that, now that we’ve seen how it works in action: Mike, I’d love to get a feel from you as to how we built this, what were some of the considerations that went into the code, and just the general thinking behind the Hexa software?
[00:13:26] Mike Barinek: Sure. I thought I could start by simply showing a behind-the-scenes look at Gerry’s demo. So let me share my screen; folks should be able to see my development environment. If we think of Gerry’s demo, there’s a Docker compose file, right, that he was beginning with. And if I collapse this for a minute, you’ll see within the compose file we have the Hexa orchestrator.
We have the admin, we also have the demo app, we have Postgres, and then the OPA agent. This is the Docker compose file that Gerry fired up in his environment before going through the different tabs. In this case, it was initially the integration with OPA, so changing policy based on that bundle server. Maybe one thing to call out, and I’m gonna bounce back to a browser here for one second.
There are a few terms that I think it might be nice to call out, just to kind of warm up here in terms of the discussion. So really what we’re going after is this policy management point, right? We’re orchestrating policy. But there are a couple of other things going on.
There’s the actual decision around whether an individual, in this case, has access or not, and then there’s also the policy enforcement. So when you’re thinking about that first demo that Gerry gave, the admin in conjunction with the policy orchestrator covers the management of policy, right? Hence the orchestration. OPA is a very specialized tool for making a decision.
And so it has an expression language associated with it called Rego, and that is used in our case to say whether or not somebody has access. And then it’s actually in the app, right, where things get enforced. So within the Hexa demo, or the Hexa Industries demo, is where the enforcement’s happening.
And so this diagram basically embodies Gerry’s demo as well. I thought I’d show this and just get some terms out there for folks who might be new to policy management, policy decisions, and policy enforcement.
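The management/decision/enforcement split Mike outlines can be illustrated with a toy decision function in Go. This stands in for what OPA’s Rego evaluation does in the real demo; the policy shape and group names below are invented for illustration.

```go
package main

import "fmt"

// toyPolicy maps a resource to the groups allowed to see it; a stand-in
// for the Rego and data bundle that OPA evaluates in the real demo.
type toyPolicy map[string][]string

// decide is the "policy decision": given the policies and a request,
// answer allow or deny. Enforcement (hiding the tab, returning a 403)
// stays in the application itself.
func decide(p toyPolicy, group, resource string) bool {
	for _, g := range p[resource] {
		if g == group {
			return true
		}
	}
	return false
}

func main() {
	p := toyPolicy{
		"dashboard":  {"sales", "accounting", "humanresources"},
		"sales":      {"sales"},
		"accounting": {"accounting"},
	}
	fmt.Println(decide(p, "sales", "sales"))      // true
	fmt.Println(decide(p, "sales", "accounting")) // false
}
```

Management, in these terms, is whatever edits `toyPolicy`; in Hexa that job belongs to the orchestrator and admin UI.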
[00:16:08] Mark Callahan: And this is great too, ’cause I know a question that comes up quite a bit when we talk about IDQL and Hexa is: doesn’t OPA already do this? And I think what we’re seeing here is that it’s a very complementary thing; it’s not a substitute or a replacement.
[00:16:20] Mike Barinek: That’s right. That’s right. We’re using OPA for the decision in the first example that Gerry did, and then we’re actually using Google’s Identity-Aware Proxy for the decision in the second example. And then it’s the orchestrator, the Hexa policy orchestrator, that’s pushing, or setting and getting, policy against those decision points.
Okay. Let me bounce back to my IDE for a second here. Okay. So this is a bit of what our Docker file looks like, and sure, there’s some configuration in here that we talk about in the readme, whether it’s the admin or the orchestrator. I guess maybe the one thing to call out is that the main open source contribution is the actual orchestrator, right?
The admin is bundled for demonstration purposes, as is our Hexa Industries demo. So when you think about the open source contribution, it’s really this orchestrator. Good. So let’s look at what that looks like in the code base. We selected Go as the language tech stack to build the orchestrator, and in the command directory we have a few things, notably the admin, the orchestrator, and then our demo application.
There are a few other supporting applications in here as well, but the main ones in terms of a takeaway from this webinar are those three. And if we look at one of these, let’s dive into the admin quickly, and then we’ll bounce over to the orchestrator. So there are really a few things happening in this package, or in this area of the code base.
Right? So the first thing we’re doing is we’re calling main and starting up the application. We then dive into this function, new application or new app, and we get a few different things from the environment. So in this case, the admin is talking to the orchestrator. The orchestrator is headless, just a REST API, where the admin is the graphical user interface.
And we’re providing a URL to the orchestrator, and a key. We’re also talking about what port we should start on; if we’re running within a Kubernetes cluster, we might be getting that port from the environment. And then we create this new app, and this creates the Hexa admin that Gerry showed you.
Most of the inner workings of the app are tucked into this package directory down here, and we’ll look at that in a second. If we look at the orchestrator, the same type of setup is here. So we have a main method, then we also have the new app, and then we have the app itself
that we are creating. And again, this one is headless and it’s responding to a RESTful API. The interesting thing to call out here is that this is where some of the magic happens around adding the different providers. So for example, we add our Google provider, right? This is what was interacting with Gerry’s demo.
It was the Identity-Aware Proxy, and it knows how to discover applications deployed on App Engine or Kubernetes. We also added an OPA provider, and naturally, as you can see here, we have Azure and Amazon as well. Those are the four initial providers that we targeted in terms of putting the software together for, as it says here, our open source release. I guess I named this one prior to being open, so it was almost open.
Okay, well, let’s dive into one of these providers and take a look at how this is laid out. Before that, I did mention there’s an admin directory; there’s also our orchestrator directory here. If folks are looking at the code base, thinking about hopefully contributing soon, you’ll see some support directories, and these are very intentional.
So the convention we’re using here is that we tack on “support.” If we wanted to emit Prometheus metrics, or simply report whether the orchestrator is healthy, we would bring in one of these support libraries or support packages. And so we have the main two, orchestrator and admin, and then a lot of supporting packages that give us some flexibility to move around, especially in terms of maybe bringing in some other open source that would support something like our asynchronous workflow that’s actually used to discover applications.
Okay, so let’s go back to our provider. So if I dive into our Google provider here, and I collapse this for a bit, sure, we have the type here, and really what we provide here is just an HTTP client. This is what’s gonna reach out to Google’s cloud API to ask about different applications, and then also to set and get policy.
So it’s really these three that make up the provider interface: discover applications, get policy info, and then set policy info. The integration info is that key file that Gerry uploaded. In the Google example, it was the project information with some credentials; in the OPA example, it was really just where to find the configuration that OPA looks for, so that we could change it. In our case, we’re modifying some of the Rego and data, if you’ve learned a little bit about OPA in the past. So if we look at the actual interface by diving into the orchestrator, we should see those same methods, which are right here. The organization is such that the orchestrator really doesn’t know about the different providers that are getting plugged into it, other than that one bit at startup.
When we’re here in our original command, everything else is tucked into this provider package for the moment. So you see all of these. And then if I bounce back to my Google provider and navigate to it, you’ll see that it’s sitting right here. So that’s a bit of the provider setup from the code view. Let me show you what it looks like in a diagram.
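The three-method provider interface Mike walks through can be sketched as follows. The type and method names approximate what is shown on screen; this is a reconstruction from the talk, not the exact hexa-org source.

```go
package main

import "fmt"

// IntegrationInfo carries the uploaded connection details, e.g. the GCP
// key file or the OPA bundle-server config Gerry selected in the demo.
type IntegrationInfo struct {
	Name string
	Key  []byte
}

// ApplicationInfo identifies a discovered resource (app, database, etc.).
type ApplicationInfo struct {
	ObjectID string
	Name     string
}

// PolicyInfo is a subject/action/object policy in IDQL terms.
type PolicyInfo struct {
	Action  string
	Subject []string
	Object  string
}

// Provider is the pluggable interface each target platform implements.
type Provider interface {
	DiscoverApplications(info IntegrationInfo) ([]ApplicationInfo, error)
	GetPolicyInfo(info IntegrationInfo, app ApplicationInfo) ([]PolicyInfo, error)
	SetPolicyInfo(info IntegrationInfo, app ApplicationInfo, policies []PolicyInfo) error
}

// memoryProvider is a trivial in-memory implementation for illustration;
// real providers (GCP, Azure, Amazon, OPA) call platform APIs instead.
type memoryProvider struct {
	apps     []ApplicationInfo
	policies map[string][]PolicyInfo
}

func (m *memoryProvider) DiscoverApplications(IntegrationInfo) ([]ApplicationInfo, error) {
	return m.apps, nil
}

func (m *memoryProvider) GetPolicyInfo(_ IntegrationInfo, app ApplicationInfo) ([]PolicyInfo, error) {
	return m.policies[app.ObjectID], nil
}

func (m *memoryProvider) SetPolicyInfo(_ IntegrationInfo, app ApplicationInfo, p []PolicyInfo) error {
	m.policies[app.ObjectID] = p
	return nil
}

func main() {
	var p Provider = &memoryProvider{
		apps:     []ApplicationInfo{{ObjectID: "app-1", Name: "Hexa Industries"}},
		policies: map[string][]PolicyInfo{},
	}
	apps, _ := p.DiscoverApplications(IntegrationInfo{Name: "local"})
	fmt.Println(len(apps), apps[0].Name)
}
```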
[00:23:48] Eli Friedman: Mike, while you’re getting that pulled up: as a contributor, if I wanted to add another provider, to confirm, I would be implementing that interface you just showed?
[00:23:59] Mike Barinek: That’s correct. So those three methods: discover applications, set policy, get policy. And we’ve really tried to keep things simple and not expand that interface, really keeping it to the bare bones.
Okay. So we were looking at the readme on the GitHub repo. At the top of the readme, we’ve added a bit of an architecture diagram to describe the provider interface. So if I bounce over here, the left-hand side, you can think of that as the admin, right, the UI that Gerry had shown where he was changing policy.
And in this case, maybe it’s worth calling out again: we have the Hexa orchestrator, and then IDQL is the policy format that the orchestrator accepts. Those are under the same hexa-org GitHub organization, but as two separate projects. And the orchestrator, as I mentioned, accepts input via this REST API.
And so the same, I would say, methods look fairly similar. So there’s the provider integration, which is the uploading of that key file that Gerry shared. There’s then managing resources, which is really discovery, whether it’s apps, databases, and so on; resources is the generic term we’ve been using there.
And then the set and get policy is really what we talk about when we say manage policies. And the provider interface, again, is depicted here, illustrated here, and then each of the different packages. So we have Azure, Amazon, GCP, and then really any provider. And again, we’re hoping that getting this into the open source, getting it into the CNCF, will really spark some of the community to start adding their own providers
and implementing our interface. And again, the ones that we have so far are Azure, Amazon, GCP, and I guess OPA as well is here. So that’s a look at the provider interface. For folks who’ve spent time in the cloud native space, there was a project called BOSH, and it had a cloud provider interface as well.
And this was roughly modeled after that open source, that pluggable aspect, that provider interface. So that is the provider interface. Let me now jump back to the IDE here for a second. And so, yeah, to Eli’s point, here’s what it would look like to create a new provider.
So I might have a new provider, and then it’s really just a couple of things that I would implement.
Right. I’d have a new provider that I’d kick off with my tests; I’d isolate tests by adding underscore test to the package name, and then I’m off to the races in terms of the provider. And then more often than not, there’s the client associated with that. So there’d be a new HTTP client
that’s tucked in here as well. So for Azure, you might even get one that’s pulled in with their SDK, or you could write one on your own, but that’s what it would look like creating a new provider. And while we have one code base, one repo I should say, for now, I think there’s a natural evolution of these providers making it into their own repos.
And the hope long term is that some of the folks listed here would actually then manage that provider implementation long term.
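Putting Eli’s question into code, a new provider skeleton might start like this. The interface shown is a simplified stand-in (the real one in the repo has its own signatures), and `exampleProvider` and its client are hypothetical names.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Simplified stand-in for the orchestrator's provider interface.
type Provider interface {
	DiscoverApplications(key []byte) ([]string, error)
}

// exampleProvider is a hypothetical new provider. Following the pattern
// from the walkthrough, it wraps an HTTP client that would talk to the
// target platform's public API (or you might pull in the vendor SDK).
type exampleProvider struct {
	client  *http.Client
	baseURL string
}

func newExampleProvider(baseURL string) *exampleProvider {
	return &exampleProvider{
		client:  &http.Client{Timeout: 10 * time.Second},
		baseURL: baseURL,
	}
}

// DiscoverApplications would call the platform API; stubbed here.
// Tests would live in an example_test.go file in an _test package, as
// mentioned in the talk, pointing baseURL at a fake server.
func (p *exampleProvider) DiscoverApplications(key []byte) ([]string, error) {
	if len(key) == 0 {
		return nil, fmt.Errorf("missing integration key")
	}
	return []string{"demo-app"}, nil
}

func main() {
	var prov Provider = newExampleProvider("https://api.example.invalid")
	apps, err := prov.DiscoverApplications([]byte("fake-key"))
	fmt.Println(apps, err)
}
```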
[00:28:16] Mark Callahan: I’m glad you mentioned that. We actually had a question from Aaron in the audience who was asking: what do we see the cloud platforms needing to own? Do they have to own this? Did you have to work with them to create these providers?
[00:28:28] Mike Barinek: We didn’t, actually. You know, the platforms we’re using here, Google, Amazon, Azure, had really good documentation. And in the Google example, they have the Identity-Aware Proxy that we were able to just integrate with. So there was nothing, really, beyond just the public APIs that we’re using here.
And that was maybe one of our guiding principles in terms of putting Hexa together: we wanted to really tailor the orchestrator to those public APIs, or public REST APIs. So that’s the provider interface, I guess.
[00:29:15] Gerry Gebel: Yeah, Mike, if I could jump in there with a question.
I think what you just described is something very different from some of the standards efforts we’ve been involved in in the past, in that with something like SAML, we had to wait for product managers to add that spec to their backlogs and build the functionality in.
But since we now have these public APIs available to us, we were able to bring IDQL and Hexa to these environments, rather than waiting for them to consume them. So that was a huge benefit to us, for sure.
[00:29:49] Mike Barinek: Yep.
[00:29:50] Neil Danilowicz: And it’s also a big benefit at the different layers through the stack, right? Network providers and vendors are now starting to do all this orchestration through their own APIs, whether they’re public or whether they’re proprietary. Now I have a manner of going in here and actually working to provide this layer of abstraction, to allow that part of the policy to be done through this orchestrator.
So it gives you that commonality, not just from a cloud service provider, but even from the network stack, to say, hey, I want to control this device, but I want to do it through policy, and I want to use a lot of the same constructs that are used for all the other types of policy.
[00:30:34] Mark Callahan: Awesome point. ’Cause again, we talked about this not just being east-west, across cloud platforms, but actually up and down the tech stack, that north-south access as well, as we look at common policy in both directions.
[00:30:48] Eli Friedman: Changing topics a little bit, another thing caught my eye here, Mike. I believe it was in the command directory: the demo smoke tests. Can you speak to that a little bit, and how it reflects the test strategy of the project?
[00:31:10] Mike Barinek: Yeah, I guess over the years we’ve gone back and forth on different approaches to testing.
I think I’ve landed in a world where you have really fast tests, and then you might have slightly slower tests. In this case, all of our tests are lightning fast, which is nice. I’m gonna actually have a hard time going back to languages other than Go, because that was our test suite that just ran, and it really moves.
Right. It’s pretty interesting. And the test suite is actually, in our case, firing up the applications as well as talking to Postgres directly, where we’re storing some of the credentials. And the smoke test is one that I would say I don’t run as often. So it’s not the command
that I hit every few seconds, or continue to hit while I’m developing. This one we’ve marked as an integration test. And so at the top, we’ve given the go test command a flag that says, hey, run this every so often, when you see that flag.
And I guess the interesting bit here is that this is Gerry’s demo. So if you think about what we’re doing, we’re making some of the different commands, and we’re truly making them, in a sense; we’re passing in the environment variables. So this would be testing outside of the code base.
This file could potentially live somewhere else, but what we’re doing here is firing up the world that we need. So if you think of the demo app, this is the configuration for OPA, or even the orchestrator itself; we start all of those and then start asserting against the URLs. So if you remember Gerry’s demo, it said, great news,
you’re able to access this page, or, sorry, you’re not able to access accounting or human resources. And I guess it’s a little bit of the opposite here: we go in and we update policy, and in the test we actually give the sales crew access to accounting, and then we wait for our decision engine to actually load its config.
And then we have access to the accounting tab. We then also update with an erroneous policy to make sure we handle that case as well. But this test is really nice. Smoke test, you know, there are different words for this test; it embodies, or mirrors, Gerry’s demo exactly. And this is what gets run and gives us the confidence to say, okay, everything’s working and we haven’t broken anything. So let’s, yeah, continue to push on.
[00:34:12] Gerry Gebel: That makes my life better for sure, Mike, because I might rebuild my environment at any point in time and just fire up a demo.
And so I’ve got a pretty high degree of confidence that it’s always going to work. So yeah, it definitely makes my life better.
[00:34:26] Mike Barinek: Yeah. And if you think of that testing pyramid, the reason this would be considered a slow test is because I’m actually waiting, and different areas will actually poll to see if the right config got set.
But for the moment, I was just waiting for it to refresh the config there. But I guess testing’s fairly important for us. We tend to hit pretty high coverage if I run with coverage as well, and it’s a little bit top of mind, I would say. There are definitely areas that we don’t hit a hundred percent, but I think that’s fairly natural.
You know, some of the core pieces we try to hit for sure, and some of our supporting libraries we try to hit with high coverage. But it’s nice; it gives us a lot of confidence to move forward fast, and then just continue that forever. Perfect.
So, I guess I was gonna grab Hunter, who’s sitting in front of me. Hunter and I paired on a lot of the code base here.
I guess one thing, maybe, I don’t know if it makes sense to keep the IDE up or not. You can hear me? I was hoping maybe you’d talk a little bit about infrastructure. The one thing that maybe caught myself and Hunter off guard a little bit: when you’re working with something that sits on the platform side of the house, or the infra side of the house, there’s a lot going on there.
And I thought maybe Hunter could just say a few words. What he discovered.
[00:36:18] Hunter Gilane: Yeah, it was an interesting exercise for sure. And I guess if you kind of imagine the demo, the Canary Bank world, given that we’re kind of living on GCP, Azure, AWS,
right, and the orchestrator is actually orchestrating policy out in the real world through all those public APIs and whatnot. I think on a standard software project, you’d commit some code, push it, and it would just get deployed maybe one place, right?
[00:37:08] Mike Barinek: Yeah. To like one environment. Yeah. Like you pick, you know, most companies would pick and say, yep, we’re deploying to Google or we’re deploying to another, you know, cloud stack.
[00:37:18] Hunter Gilane: Yeah. And what got really interesting here was not only deploying an app to multiple environments, but for each cloud platform, needing to actually deploy it to N number of deployment options for that given platform.
Yeah. So on GCP, you’re not just deploying it to, you know, Cloud Run or GKE or to some container; you’re deploying to all of them, right? Like, what are all the options for each platform? So the build pipeline got fairly interesting.
[00:38:02] Mike Barinek: A little bit gnarly. Yeah.
[00:38:04] Hunter Gilane: In fact, there’s a whole Kubernetes environment deployed just to manage images and pipelines and deployments and all that.
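[Editor’s note: the multi-target pipeline Hunter describes could be sketched as a CI build matrix. This is a hypothetical GitHub Actions fragment — the platform names, target names, and `deploy.sh` script are purely illustrative, not the actual Hexa build pipeline.]

```yaml
# Hypothetical CI matrix: one deploy job per (platform, deployment option) pair.
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - { platform: gcp,   target: cloud-run }
          - { platform: gcp,   target: gke }
          - { platform: gcp,   target: app-engine }
          - { platform: azure, target: app-service }
          - { platform: aws,   target: ecs }
    steps:
      - uses: actions/checkout@v3
      # An illustrative script that wraps each platform's own deploy tooling.
      - run: ./deploy.sh "${{ matrix.platform }}" "${{ matrix.target }}"
```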
[00:38:18] Mike Barinek: Yeah. At some point we tried to settle on Kubernetes, just because it gave us a bit of an abstraction on top of each of the cloud providers.
But even then, we found within some of the environments that not only are there different places to deploy — there’s also, as we’ll talk about, going a bit south in terms of networking to manage policy — but even with something as simple as App Engine or Kubernetes, there are different levels at which you can manage policy as well.
At the resource level, or at the resource group level. So the orchestrator became necessary to orchestrate the deployment across the different cloud providers and the different environments as well. Okay, well, thanks, Hunter. I thought I’d put you on the spot at least a little bit here.
[00:39:13] Mark Callahan: Our guest speakers.
[00:39:15] Mike Barinek: Yeah. Yeah.
[00:39:16] Mark Callahan: That’s great. And that’s why we’re here — to get it from you all directly. And Neil, on that point, I’d love to get your thoughts on how we thought about that at the network and other layers.
[00:39:27] Neil Danilowicz: Yeah. Well, that’s kind of where I came into the project — to really discuss identity and this whole zero trust idea, right? We all know it’s users and applications, and we want to make sure that users can get to the right applications and see the right data. But at the same time, these users have to go across services and across networks, and we need to make sure it’s the right users using the right service,
getting to the right application. So now we have to work at it up and down the stack. And previously, as we saw, it was completely disjointed: what you did from an IDP perspective at the application layer, for the most part, did not solve what the network or the infrastructure needed to do at that layer.
And what we needed was a way to abstract that, right — to allow some sort of provider or interface that takes the policy in simple words: I want Neil to be able to get onto this network device with these permissions, just as if Neil were getting into an application and needed to see finance versus sales versus accounting.
I need that ability, and I need to be able to do it in a way that someone who’s just setting policy, without knowing the individual technologies, can actually do it. Because if Neil wants to do something, I don’t need to know how to configure it at that level. I want to abstract that away, but still be able to use those simple building blocks.
And when we started the work on IDQL and started to figure out all of these commonalities, that’s when we had this aha moment: not only can we do this east-west — GCP versus Amazon, Facebook versus Twitter — I can do it on the network. I can do Cisco versus Versa. I can do it whether it’s an IP-based service or a non-IP-based service. I can do it across all these different sets.
And I don’t have to worry about the provider — whether it’s Verizon or AT&T. I can work it all together and make it one common set. And I think that’s where the power of this platform comes in: I can do it east-west and north-south, as long as I have the right development to build in those providers and those translators we talked about. Because clearly you want to do it in one place, and then it gets pushed out across all platforms, and also pushed north and south for all the different policies you want to manage.
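[Editor’s note: Neil’s example — “I want Neil to be able to get onto this network device with these permissions” — maps naturally onto IDQL’s declarative subject/action/object shape. The fragment below is a hypothetical IDQL-style policy; the field names and values are illustrative sketches, not the exact Hexa schema, which is defined in the project’s GitHub repo.]

```json
{
  "meta": { "version": "0.6", "policyId": "allow-neil-network-device" },
  "subject": { "members": ["user:neil@example.com"] },
  "actions": [{ "actionUri": "net:ssh" }],
  "object": { "resourceId": "edge-router-42" }
}
```

A provider (translator) for each platform would map a generic policy like this into that platform’s native policy model — IAM bindings on a cloud, ACLs on a network device.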
[00:42:11] Mark Callahan: Well, that’s great timing. One of the questions that just came through is: what happens when policy changes at the CSP level?
[00:42:20] Gerry Gebel: Yeah, that’s something we’ve been thinking about quite a bit within the working group, Mark: what is the authoritative source, or the source of truth?
Is it the IDQL and Hexa system? Is it the CSP? Or is it somewhere else — on GitHub, for example? We acknowledge there are a number of different ways to approach that, and organizations are going to have their own management styles or procedures. We will, at some point, put up an example workflow, or an example policy life cycle, where we’ll show what we choose as a source of truth.
And I think we’ve been talking about using GitHub for that, so the IDQL repo will actually be on GitHub. If there’s a change on the CSP side, we’ll go back in and update what we view as the source of truth. But that’s going to be implementation dependent, on a number of different factors. And just like Mike and Hunter were talking about all the different options on a single cloud platform, there are different ways that people approach that sort of change-control DevOps process.
So we’ll have one as an example, just like we have these example demonstration applications and the admin UI. It won’t be the only way it can be accomplished, but it will show some ideas.
[00:43:48] Mark Callahan: Awesome. Awesome.
Well, we’re just about out of time, but the good news is we were able to answer, I think, all the questions we received during the session. Mike, any closing thoughts or anything you wanted to add in summary? I want to share some resources on screen real fast.
But is there anything else, Mike, from the code perspective, that you’d want to share?
[00:44:12] Mike Barinek: Mainly just: if you think of the demos that Gerry gave — the GCP demo as well as the OPA demo — hopefully you saw the transition from his demo to the software, whether that’s the Docker Compose file or how some of the components are laid out. And then hopefully you got a look at the provider interface that we have — we’re hoping folks start jumping in and contributing — and a little bit of a view on how we think about testing, and how we have the confidence to move forward fast, forever.
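[Editor’s note: the provider interface Mike mentions is the main extension point for contributors. Here is a rough sketch in Go — the language the Hexa codebase is written in. The type names, method signatures, and the fake provider below are illustrative only, not the actual Hexa API; see the GitHub repo for the real interface.]

```go
package main

import "fmt"

// Policy is a simplified stand-in for an IDQL policy document.
// Field names here are illustrative, not the actual Hexa schema.
type Policy struct {
	Subject string
	Action  string
	Object  string
}

// Provider is a sketch of the plug-in point: each cloud platform or
// network product implements the same interface, translating a generic
// IDQL policy into its native policy model.
type Provider interface {
	Name() string
	SetPolicy(p Policy) error
}

// fakeGCPProvider is a hypothetical provider that just records what it
// was asked to apply; a real provider would call the platform's IAM API.
type fakeGCPProvider struct{ applied []Policy }

func (f *fakeGCPProvider) Name() string { return "fake-gcp" }

func (f *fakeGCPProvider) SetPolicy(p Policy) error {
	f.applied = append(f.applied, p)
	return nil
}

// orchestrate pushes one policy to every registered provider,
// stopping on the first error.
func orchestrate(p Policy, providers []Provider) error {
	for _, prov := range providers {
		if err := prov.SetPolicy(p); err != nil {
			return fmt.Errorf("%s: %w", prov.Name(), err)
		}
	}
	return nil
}

func main() {
	gcp := &fakeGCPProvider{}
	err := orchestrate(
		Policy{Subject: "user:neil@example.com", Action: "can_read", Object: "finance-app"},
		[]Provider{gcp},
	)
	fmt.Println(err == nil, len(gcp.applied)) // true 1
}
```

The point of the design is that the orchestrator only ever sees the `Provider` interface, so supporting a new platform means writing one new translator rather than touching the core.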
So, yeah — the GitHub repo has a lot of info there for folks as well. Go ahead.
[00:44:59] Mark Callahan: Yeah, and Twitter as well. It starts there: go download the software. We want to make sure people have a chance to start working with it. We also invite anyone who’s interested to join the working group.
We know there was a lot of interest in setting the course for Hexa moving forward, and you’ll see the link here on how to get involved. And finally, follow us on social media — Hexa has its own Twitter account, and that’s where you’ll learn about developments and upcoming events.
So with that, I’d like to again thank you all for joining us. This was a cool conversation; I had a lot of fun. I’ve been a part of this the whole time, but it was great having a summary and being able to share all the work that went into this with our audience. So thank you to our presenters for joining us today, and to our audience as well.
Any final questions you might have, please send them our way via info@hexaorchestration.org, and we’ll be sure to get the answers emailed back to you. A reminder that the recording of this session will be available shortly after. And with that, I want to wish everyone a great day. Thank you.
[00:46:03] Neil Danilowicz: Thank you.
[00:46:04] Gerry Gebel: Thank you, bye now.
[00:46:05] Mike Barinek: Thanks all.