>centralized router(s)
When using Tailscale, your packets may be sent through centralized routers, FYI.
Literally every single word of it
Buzzwords can still be technically accurate, and you seem to be ignoring that whenever it's pointed out. "But it is" doesn't matter when it comes to "but it sounds like".
Let me give you an example: if I were to walk into a store, do you think I'd want to talk to the person who talks about the "bidirectional optoelectromechanical document transcription and reproduction apparatus implementing discrete photonic acquisition and microdeposition techniques for bidimensional substrate encoding", or would I want to talk to the person who will sell me a "photocopier"?
"This software let's you set up peer to peer networking between your devices, expose them to the internet, run applications on them, view useful runtime information, and integrate easily with cloud providers and infrastructure you're familiar with."
“Octelium is a full-featured access control platform, which provides API gateways and/or VPN tunnels to your HTTP services, paired with an intuitive user, policy, and auditing backplane and policy-as-code.”
Something like the above would be much more enticing to potential users including myself. I can get a rough idea of what I can actually use it for and how it can be integrated into my existing stack—and if there are more features I’ll be pleasantly surprised when I read the docs!
For example, many organizations use a mix of gated HTTP over the public internet AND VPN; each one has its own vendor auth product(s) and user whitelisting, and it's difficult to control or regularly audit. Octelium centralizes this management and gives admins the flexibility to control how services are exposed and to whom, presumably via simple policy-change git commits. SOC2 etc. then becomes a breeze: export the state of the world, onboard/offboard employees, and so on.
Defining the product in terms of use cases/problems/solutions rather than competing alternatives (Tailscale, Okta, ORY Hydra, etc.) will go a long way to increase clarity.
The problem isn't that you need to “make it easier to understand for business people” (which many here would take as an offense), the problem is that you're name dropping technologies and concepts without articulating exactly what problem your product solves, and what your exact value proposition is.
Something that does everything usually does nothing well, or at least doesn't provide a coherent dev experience with a sane mental model.
API gateway, MCP, Oauth, VPN --> not buzzwords
The defining characteristics of a buzzword are that it is very broad, promises pie-in-the-sky, and is almost universally under-delivered on by every vendor while incurring very steep costs. In other words, the reason "zero-trust" scares people is that they have probably been burned N times by Oracle, Okta, etc., incurring large costs to achieve underwhelming/non-functioning results, oftentimes paying $$$ to solve imagined infinity-scale problems that don't even apply to the current org size, or even 10x that size.
API gateways, MCP, and VPNs are tangible things that fill fairly mundane roles; it is not hard to envision how they can be used to solve real-world problems. I can easily envision dropping an "API gateway" in front of "MCP" in my stack. ZTNA, however, I cannot just sprinkle on my stack as if it were magic pixie dust...
It doesn't mean that ZTNA should be outright banned everywhere, but when you do use it, you need to be very careful to define an exact meaning expressed in terms of non-buzzword components.
- "remote access" : https://www.pomerium.com/docs/capabilities/kubernetes-access
- "access control" https://www.pomerium.com/docs/capabilities/authorization
- "visibility and auditing" : https://www.pomerium.com/docs/capabilities/audit-logs
- "user and identtiy management" https://www.pomerium.com/docs/capabilities/authentication to which I'd add device identity as well.
- "centralized policy management": https://www.pomerium.com/docs/capabilities/authorization & https://www.pomerium.com/docs/internals/ppl
- deployments using Ingress Controller or GatewayAPI https://www.pomerium.com/docs/deploy/k8s/ingress, https://www.pomerium.com/docs/deploy/k8s/gateway-api
- "for an arbitrary number of resources" not sure what to link to but there's no limit here
Congrats on the release. I saw your thread on MCP and completely agree with the approach. Happy to trade notes :)
> Funnily enough, Octelium started as a sidecar ext_authz svc for Envoy instances to operate as an IaP but I ended up creating my own Golang-based IaP, Vigil, from scratch because Envoy was just nothing but pain outside HTTP-based resources.
That's really funny... we went the opposite direction, as the original versions were based on a custom Go proxy. Of course there are tradeoffs either way. Envoy is blazing fast and handles HTTP naturally, but it has a giant configuration surface area (both a pro and a con), and we are now having to write some pretty low-level filters/protocol capabilities in Envoy for the other protocols we support (SSH, MCP, and so on) in C++, which does not spark joy. So I totally feel what you are saying.
Thanks for the kind words; though I am one of the contributors, my colleague did the heavy lifting on the WebAuthn side.
Genuinely happy to see the release and where you are headed on the AI/MCP side. I am trying to bring more light to this model in the spec, if you (or others) would like to weigh in: https://github.com/modelcontextprotocol/modelcontextprotocol...
Thus, those people coming across your project may quickly overlook it instead of giving it a chance which is disappointing.
By contrast, here is Tailscale's tagline: "Fast, seamless device connectivity — no hardware, no firewall rules, no wasted time."
That kind of tells even a non-technical user what it is for, even if it dumbs down all it can do. That user then doesn't need to know any technical jargon, how it works under the hood, or even what WireGuard is at all. The tagline is what prompts them to install and try it out, and from there the UX is the deciding factor in whether they keep using it.
I understand it may not be easy to narrow down the explanation, especially if you invested a lot of time and don't want to do a disservice to yourself by underselling it. Looking at the Tailscale tagline I quoted, it is small and ambiguous enough that it works marketing wise, regardless of all the features and solutions they offer. But it was just an example, I should maybe have used a totally different example of a product that is not in the same realm as yours.
The explanation you gave to me here is good but only because I vaguely know what all this jargon means. Try to think of a short simple sentence that a non-expert could understand.
I like where you are going with the graphics in the readme; I'd spend some effort on creating "intended use case" scenarios, ones that highlight situations where the project is the perfect fit. Using a few of these to highlight very different applications gives people a good mental map of where this project would fit well for them.
"John is looking for a way to provide access to an internal tool to work-from-home colleagues. This isn't simple to do because [...]. Octelium is a good fit because [...]. Here is how John would set it up: [...]"
I share some (very little) of the criticism regarding the clarity, but I disagree that you need a tagline like Tailscale's when your solution does several times more things.
Great product, I'm chewing through the docs already :)
Rather one could argue they are technical jargon? But then if the technical jargon is over someone's head, maybe they are not the target audience.
I understood most of it, but it is quite dense for the first paragraph.
I don't trust this.
That said, you are also including some buzzwords on your homepage that appeal to Hacker News folks, like “self-hosted”. That will get a blank stare from enterprise folks.
So I think you should pick one audience or the other. Tailscale took the strategy of appealing to Hacker News types and then shifting up market from there. My company appeals directly to the biggest enterprises we can find and the difference is stark.
I think you’ll get less negative feedback if you choose one of these target audiences and focus on them exclusively.
edit: by the way, Octelium looks awesome, well done!
Even the diagrams for API vs AI gateways are almost identical.
I am also working on extending the process of modifying HTTP requests/body content beyond what's already provided (see more: https://octelium.com/docs/octelium/latest/management/core/se...). For now, Envoy's ext_proc support is coming, and I might also work on proxy-wasm support if there is interest in it.
Last year I wrote an article about how to write a good README:
It’s way too long and excessively prescriptive, and the author goes too far in inserting their opinions (e.g., don’t use GitHub, don’t use Discord, etc.). I couldn’t possibly recommend this how-to.
I think this is the part where I’m supposed to tell third parties to disregard yours, but I’m not going to.
It's clearer because, instead of starting with a massive list of everything you could do with Octelium (which is indeed confusing), it starts by explaining the core primitives Octelium is built on, and builds up from there.
And it actually looks pretty cool and useful! From what I can tell, the core functionality is:
- A VPN-like gateway that understands higher-level protocols, like HTTP or PostgreSQL, and can make fine-grained security decisions using the content of those protocols
- A cluster configuration layer on top of Kubernetes
And these two things combine to make, basically, a personal cloud. So, like any of the big cloud platforms, it does a million things and it's hard to figure out which ones you need at first. But it seems like the kind of system that could be used for a homelab, a small company that wants to keep cloud costs down, or a custom PaaS selling cloud functionality. Neat!
All the streaming services are enshittifying, even the smaller ones. And other smaller webshops are enshittifying the same way that Amazon does. Like Cory Doctorow described, there's a few big webshops in the Netherlands like bol.com and coolblue.com and they are now also allowing third party sellers, often even from China. The webshops are absolved of all responsibility but they do cash out on every transaction.
A lot of employees at successful startups & FAANG make most of their money from the stock, no? And they need to buy houses and send their kids to fancy schools too, no? So sure, we can reduce it to stock holders, but I’d bet dollars to donuts the 90% of employees who aren’t posting on hn are at least passively ok with “improving metrics”, and some ambitious ones are driving the enshittification initiatives hard.
Personally I'd be much happier with a stable income with not much upward mobility but also not much risk of falling downwards. Which is what Europe is geared more towards. I don't constantly want to be in a race. Just to live my life.
If the employees want it, fine, but don't be surprised if we customers start finding alternatives and/or pirating their content (e.g. when it comes to streaming services).
But yeah, American companies aren't there to support the employees. The only ones they answer to are the owners or large shareholders (whichever applies), and their only goal is to make those richer. Customers and employees alike are nothing but consumables, a raw resource you only treat right if you can't avoid it.
No other industry operates with such a blurred distinction between employees and owners. Well, save for the gig economy, itself a tumor on American-style big tech.
Remember, revenue must always increase, and must always increase faster than the year before (and this is more important than keeping the company alive), so companies always try increasingly desperate measures. Right now they are nowhere near the point of that particular measure. But they will be in the future.
Not all are commercial (but why would you want that anyway). But ZeroTier is another one like that. Basically the same thing.
EDIT: yep, referenced in your link! They have a very clear page[0] describing what is and isn’t open source.
The README is way too verbose though. It should explain the project at a glance and have links to docs for the details.
https://tailscale.com/opensource: "The core client code for the Tailscale daemon used across all platforms is open source, and the full client code is open source for platforms that are also open source."
As the content at the link makes crystal clear, the client is open source. Additionally, Tailscale's GUI for that client is open source on open source platforms.
99.999% of users of closed source OSs aren't going to care that the GUI isn't open source. The remaining 0.001% just use the open source client without the GUI.
They may or may not care, but just so we’re all on the same page, there isn’t an open source version of the end user application software for closed source operating systems that I can find on that page or any other Tailscale page. If one exists, I am happy to be corrected by you, and I am giving you the opportunity to do so now.
Gatekeeping much?
> If (1) you personally need a GUI and (2) have a religious objection to running closed software on your closed source OS, ask a more technical friend to set up Cattail if you're on Windows, or an Automation that drives the CLI if you're on macOS.
I’m just asking if Tailscale has an open source application. Users who know what a GUI is and why it’s also important that it is open source will care that the GUI isn’t open source. The ones who don’t, won’t. Who benefits from obscurantist-posting like you are doing? Probably not the folks who don’t know that Tailscale doesn’t have an open source app, because the GUI is part and parcel to what the app even is conceptually to the sorts of users who don’t know what GUIs are.
If you’re launching a business, I would suggest making sure the business looks legitimate; if it’s a pet project, trying to make yourself sound like a big business and then not having the footprint gives off “fake”/scam/caution vibes. If you’re a solo dev, drop all the fake business stuff, get rid of the buzzwords and “it can do everything” marketing, and focus on what it excels at as an open source project.
People are going to be (rightfully) skeptical that a solo dev/no-name company can suddenly drop a product that rivals those of massive companies. Either massive shortcuts were taken, or there is a high chance that it will be insecure, which is not something you want from a VPN or any of the other things it claims to do. If you’ve built on existing secure technologies, you should emphasize them, because known names with a security history are going to build a lot more trust than a no-name product.
If a piece of software is hard to explain the purpose of to an average person in a single sentence, you have an uphill battle. Listing more features isn’t usually going to be the answer, regardless of how accurate you’re attempting to be. “It’s a VPN! And a PaaS! And a ZTNA! And an API gateway! And AI!” screams “please download me” rather than “I’m here to solve a problem”, which is why I wouldn’t even bother to try it; the opposite of what any project is going for.
My intention isn’t to just be critical, but rather to point out things that are likely harming your efforts.
One might posit that you're repackaging a FOSS project from somewhere, with no clear ethos.
If they don’t trust you, that’s their right, but then they should just not use the software, instead of writing these kinds of caustic comments. Poor form in my view.
Keep it up, it looks amazing!!
Should a user call people posting their software here state actors and such? I really don’t think it helps anyone.
Rather, they should lay out their points and suggest how the author could change their mind. Alternatively, they could direct their warning to others if they feel their conviction that this is malware is unshakable.
> If a piece of software is hard to explain the purpose of to an average person in a single sentence, you have an uphill battle.
It does. If you use tailscale/cloudflare access and ngrok, the product is pretty well described. If you don't, then probably you don't need this product.
A developer/company with an opaque background that you're to trust to give access to backend systems using passwordless embedded SSH (no keys needed!).
That's a big NOPE.
(Also, even the answers OP has provided really give an AI bot vibe)
Tunneled Reverse Proxy Server with Access Control - your own self-hosted zero trust tunnel. AGPL3
For what it’s worth, the title of the post is a pretty good pitch. Leaving it at “FOSS Alternative to …” would be a step in the right direction.
https://octelium.com/docs/octelium/latest/management/guide/s... https://octelium.com/docs/octelium/latest/management/guide/s... https://octelium.com/docs/octelium/latest/management/guide/s... https://octelium.com/docs/octelium/latest/management/guide/s...
- Tinc (the OG of P2P VPN)
- Hamachi (not open though)
- ZeroTier
- Nebula (from Slack)
- Tailscale
- Netbird
I wonder why people keep building more. I know each has its own quirks and things they're better at, but the difference is really quite minimal.
One of the things I really would like is zero-trust 'lighthouses'. With current ZeroTier and Tailscale, you really have to trust them, because they can add nodes to your account whenever they want. I don't want that; I want fully self-hosted, with the lighthouses just coordinating but not being part of the network. I have to do some research to see what would be best.
What enterprises want is to move away from perimeter based security models towards the promise that Google überProxy/BeyondCorp popularized many years ago. Which has been lost in the buzzword soup. It’s very simple.
1. A clean separation between Prod, Corp, and the public internet, where the UX of hopping between them as an employee is as transparent as possible. (Oftentimes network segmentation comes with additional painful friction for engineers.)
2. One pipe to observe, and to clearly attenuate permissions as traffic/messages flow between these boundaries.
3. Strong proofing of identity for every client, as an inherent requirement.
The problem is everyone outside Google has incredibly diverse protocol ecosystems. It makes those three promises incredibly difficult to deliver on as a vendor. (I’ve evaluated many)
Building a proxy that is protocol aware only solves half the problem. It gets you some coarse-grained decision making and a good logging story.
Building a proxy that can also perform type inference at the request layer allows for a much richer authZ story, one where businesses can build an authorization layer at the proxy better than their in-house apps could do natively. (As it turns out, having all the predicates of the request available to a policy engine is super useful.)
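To make the "predicates available to a policy engine" point concrete, here is a small sketch in Go. The `Request` fields, group names, and policy shape are illustrative only, not any vendor's actual schema: the idea is just that once the proxy has decoded the protocol, policies become plain predicates over structured request data.

```go
package main

import "fmt"

// Request captures predicates a protocol-aware proxy can extract from a
// decoded request. All field and group names here are illustrative.
type Request struct {
	User    string
	Groups  []string
	Service string
	Verb    string // e.g. an HTTP method or a SQL statement type
}

// Policy is simply a predicate over the decoded request.
type Policy func(Request) bool

func inGroup(r Request, want string) bool {
	for _, g := range r.Groups {
		if g == want {
			return true
		}
	}
	return false
}

// readOnlyAnalysts lets members of "analysts" run SELECTs and nothing else.
func readOnlyAnalysts(r Request) bool {
	return inGroup(r, "analysts") && r.Verb == "SELECT"
}

// authorize allows the request if any policy matches (default deny).
func authorize(policies []Policy, r Request) bool {
	for _, p := range policies {
		if p(r) {
			return true
		}
	}
	return false
}

func main() {
	policies := []Policy{readOnlyAnalysts}
	read := Request{User: "jane", Groups: []string{"analysts"}, Service: "postgres-main", Verb: "SELECT"}
	drop := Request{User: "jane", Groups: []string{"analysts"}, Service: "postgres-main", Verb: "DROP"}
	fmt.Println(authorize(policies, read)) // true
	fmt.Println(authorize(policies, drop)) // false
}
```

A plain L4 proxy can never make the SELECT-vs-DROP distinction above; that requires request-layer awareness, which is exactly the gap being described.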
The docs are a little verbose, and the marketing maybe isn’t amazing. But this is an inherently complex problem that no one has fully solved.
Teleport was first to market in open-sourcing and commercializing a lot of these ideas. StrongDM is also doing really interesting work in this space. I wish HashiCorp had invested more here.
Disclaimer: my opinions are my own.
I think the reason is that they want to inspect traffic in central locations; if each endpoint does its own inspection, you need to log there, which means you can't always access it immediately.
I do use Mesh VPNs privately and love them. I love the way I have this overlay, a personal network that works everywhere. My devices all keep the same address no matter where they are.
Even with VPNs, the question should be: what are we gating behind that VPN anyway? Does it actually give us the granularity of controls we want, or is this all security theater? (Also, what about hybrid infra between the datacenter and cloud?)
FWIW, my ideal architecture is WireGuard into Corp (à la Cloudflare Warp, Tailscale, etc). Corp doesn’t hold a ton of sensitive assets. Or put another way, it’s a lower trust tier.
And then using something like Teleport, Octelium, etc to reach production assets.
Admittedly, no vendor product I’ve come across yet has bridged this gap nicely. The überProxies tend to focus on the application protocols they support, while the WireGuard clients care more about session control of the tunnel.
There's reliquary [2] which I host and run for me and my hacker friends based on sanctum.
If I am a huge corpo, don’t I want another huge corpo to provide me the software with a support package, to have some assurance, rather than going with the open source option?
Not sure if your project solves any issue for a solo dev either.
Or put another way, I hope that Kubernetes integration is an option, not a prerequisite and the only supported deployment.
In case there's already material on Octelium sans k8s that I missed, pointers would be appreciated!
It's also worth noting that managing an Octelium Cluster doesn't really require any understanding of Kubernetes or how it works, except for very specific tasks, such as scaling the k8s cluster itself up/down and setting the Cluster TLS certificate, which is fed via a specific k8s secret. Apart from that, you're just dealing with Octelium.
i did feel uncertain from the README but once i got into the docs i was blown away. this is incredibly well abstracted and organized both in terms of the implementation and docs. to hear that you built this yourself is absolutely legendary. thank you for releasing this.
Your documentation is extremely detailed and generally excellent. It does seem to be targeted at people who have already deployed Octelium or are very familiar with ZTN-style deployments. It's quite fractally dense (you stumble over one new term, go to another docs page, which is just as long and has more terms you need to read about, etc.), so as you've mentioned, the issue really isn't your product but likely conveying what it does in a clear manner.
If you want to get general devs and homelabbers on board with the concept or testing this out, which I imagine is a very different target to your initial versions, maybe you could prefix your GitHub readme with something like:
"Octelium is a free and open source zero trust network platform. It integrates and manages your Kubernetes (and other container orchestration) applications to provide single point of authentication for all services. Your users log in once to one authentication provider such as a managed provider like Okta or any other OAuth OpenID compatible service and then your users are automatically granted their correct access levels to all your web services, databases, SSH, VPNs and more. Log in once, access everything, self-hosted."
When reading your documentation I immediately had a number of questions that were not clearly answered. That doesn't mean to say the answer isn't in your documentation, it's just that after 15-20 minutes of reading I still didn't have a clear answer. I'm reading this from the perspective of someone very familiar with operating Kubernetes clusters at scale and dabbled quite a bit with some of the commercial ZTN offerings. Apologies if the questions below are answered in your docs, I didn't find them in the time I had.
1) Your initial setup guides go from how to install Octelium to immediately scheduling services via YAML as a direct replacement for, I assume, something like deployments on k8s. Does Octelium actually run workloads? Is it 1:1 compatible with k8s API spec? Does it just talk to k8s over the API and effectively rewrite deployment YAML and spam it to k8s? Immediately this has concerns, why do I want this? Do I trust Octelium to manage deployments for me? Replacing a vast part of the k8s stack with Octelium tooling is a big ask for even small companies to trial. There's also just straight upstream connections, why would I want to let Octelium manage workloads over just using an internal k8s service hostname so I don't have to effectively rebuild the entire application around Octelium? Does letting Octelium manage workloads impact anything else (monitoring, logging, any other deployment tools that interact with k8s - if some CI/CD pipeline updates a container image does Octelium "know" or is it out of date?). What about RBAC stuff? Namespaces? Are these 1:1 k8s compatible?
2) If I work for BigCorp I'm going to have things like compliance issues coming out of my ears, your Services store credentials in plain text which is going to be flagged immediately. No-one is going to offload SSH authentication if root SSH keys are stored in plain text in secrets somewhere. I did note there's the option to effectively write your own gRPC interfaces to handle secure secrets storage but this seems like a pretty big hurdle. You then basically say "if you're enterprise we can help here" at the bottom, but I wouldn't even test this myself on a homelab without some sort of more sane basic secret management.
3) How, specifically, does Octelium handle HTTP upstream service issues? For example, if I'm proxying my company's connections to saas.com via saas.myintranet.example.com, I assume I lose the ability to apply 2FA to user accounts? It's unclear from the documentation: can I specify unique credentials per user? How would I manage those? Is the only option OAuth and scopes? What if the upstream doesn't support OAuth? It's fine if upstream services need to support OAuth to get the most seamless and secure ZTN-style experience; it's just not that clear how these things work in practice.
4) You re-use a lot of terms from k8s (cluster, service, etc.) with subtly different meanings, which could cause confusion for a lot of k8s-trained admins. For example, I believe an Octelium cluster is a k8s cluster running multiple Octelium instances and a custom data plane; this is different enough from what a k8s cluster is to potentially be confusing, as it sits at a different logical level. I appreciate there are limited generic descriptive terms in the dictionary to describe these things.
5) How, technically, does some of this stuff work? I know I can browse the repo or read the extensive documentation, but it would be helpful to have some clear "k8s admin"-targeted descriptions like: we install X proxy that does Y, using Z certificates; services managed by Octelium get a sidecar that intercepts connections to proxy them back into the ZTN; or whatever magic it does, in clear k8s terms. If I'm looking after something like this, I would need to know it inside-out. What if some certificate stops working? If I don't know exactly where everything is, how can I debug it? If I deployed this, it would be absolutely mission-critical; if it breaks, the entire company immediately breaks. If that happens for a couple of days, it can realistically be business-ending.
What would be really helpful to see would be some documentation with step by step guides on going from a starting point to what an Octelium-powered endgame would look like. For example, if I'm the CTO of a business that has some remote managed k8s clusters running our public SaaS along with some tooling/control panels/etc, and then a big server in my office running some non-critical but important apps via Podman, and 15 remote users that currently connect over a Tailscale VPN into the office, and 20+ upstream external web services that provide support ticketing, email, etc. then what does this look like after I "switch" to Octelium? Before and after comprehensive diagrams, and maybe even a little before-video of someone logging in via a VPN then logging into 15 control panels vs an after-video with someone logging into an intranet and just everything magically appearing would be nice. You need to sell something of this scale to CTOs, not engineers.
Generally, however, my personal biggest flag would be that this effectively requires a total rebuild of all infrastructure, plus learning entirely new tooling, management, and oversight. It's a lot to ask, and it doesn't seem like it can be rolled out that gradually. I suppose you could set up a second k8s cluster, deploy Octelium on it, and then slowly migrate services over, but that's going to be expensive.
Some suggestions:
1) You have built a very impressive toolkit, why not operate this as a SaaS using your toolkit (with on-prem open source extensions)? You don't seem to have an actual product, you have a swiss army knife of ZTN tools which is way more work so hats off to you there. It seems you're partitioning this with "the toolkit is public, write your own glue for free, to get a working platform, come pay us" which is perfectly fine, the issue there is your "enterprise" offerings seem to be even more toolkit options rather than an actual offering. Bundle in some actual enterprise tools (fancy log tool with a web UI that also does user management and connection monitoring as a single dashboard?) and it would be a lot more appealing.
2) You require replacing industry-standard workflows, like managing k8s directly with kubectl, with your own platform and tools; replacing "the way that everyone does X" with "new tool from a relatively unknown ZTN provider" is going to be a tough sell. Can you make your octeliumctl tool a kubectl plugin? Could Octelium be less control-everything and just sit on top of k8s rather than seemingly replacing a lot of it?
Edit: 3) If homelabbers are a market you want to target, have a homelab edition that's way easier to install and works in a single Docker container or something (sacrificing some of the redundancy) but offers a basic complete suite of services (dashboard, HTTP upstream proxying, VPN) with a single "docker run...".
Good luck with Octelium, it's certainly interesting and I'll keep an eye on it as it evolves.
1. Octelium Services and Namespaces are not really related or tied to Kubernetes services and namespaces. Octelium resources in general are defined in protobuf3 and compiled to Golang types, and they are stored as serialized JSON in the main Postgres data store simply as JSONB. That said, Octelium Services specifically are actually deployed on the underlying k8s cluster as k8s services/deployments. Octelium resources visually look like k8s resources (i.e. they both have the same metadata/spec/status structure); however, Octelium resources are completely independent of the underlying k8s cluster. They aren't k8s CRDs, as you might initially guess. Octelium also has its own API server, which performs REST-y gRPC-based operations on the different Octelium resources against Postgres via an intermediate component called the rscServer. As I said, Octelium and k8s resources are completely separate regardless of the visual YAML resemblance. As for managed containers, you don't really have to use them; it's an optional feature. You can deploy your own k8s services via kubectl/helm and use their hostnames as upstreams for Octelium Services to be protected like any other upstream. Managed containers are meant to automate the entire process of spinning containers up, scaling them up and down, and eventually cleaning up the underlying k8s pods and deployments once you're done with the owner Octelium Service.
2. Octelium Secrets are stored in plaintext at rest by default. That's a conscious and deliberate decision, as declared in the docs, because there isn't any one standard way to encrypt Secrets at rest. Mainline Kubernetes itself does exactly the same, and it provides a gRPC interface for interceptors to implement their own secret management (e.g. HashiCorp Vault, AWS KMS/GCP/Azure offerings, software/hardware-based HSMs, some vault/secret management vendor that I've never heard of, etc.). There is simply no one way to do it; every company has its own regulatory rules, vendors, and standards when it comes to secret management at rest. I provide a similar gRPC interface for everybody to intercept such operations and implement their own secret management according to their own needs and requirements.
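For readers unfamiliar with the Kubernetes pattern referenced above, the interceptor idea looks roughly like the following Go sketch. The `SecretCipher` interface and the AES-GCM implementation are hypothetical illustrations, loosely modeled on Kubernetes' KMS provider plugins (which actually speak gRPC), and are not Octelium's real interface: the point is only that the platform delegates encrypt/decrypt of secrets at rest to a pluggable implementation.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// SecretCipher is a hypothetical interceptor interface: the platform calls
// Encrypt before persisting a secret and Decrypt when reading it back.
// A real implementation could call Vault, AWS KMS, an HSM, etc.
type SecretCipher interface {
	Encrypt(plaintext []byte) ([]byte, error)
	Decrypt(ciphertext []byte) ([]byte, error)
}

// aesGCM is a toy local implementation holding an in-memory key;
// a real KMS provider would never expose raw key material like this.
type aesGCM struct{ aead cipher.AEAD }

func newAESGCM(key []byte) (*aesGCM, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return &aesGCM{aead: aead}, nil
}

func (c *aesGCM) Encrypt(pt []byte) ([]byte, error) {
	nonce := make([]byte, c.aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so Decrypt can recover it.
	return c.aead.Seal(nonce, nonce, pt, nil), nil
}

func (c *aesGCM) Decrypt(ct []byte) ([]byte, error) {
	n := c.aead.NonceSize()
	if len(ct) < n {
		return nil, fmt.Errorf("ciphertext too short")
	}
	return c.aead.Open(nil, ct[:n], ct[n:], nil)
}

func main() {
	key := make([]byte, 32) // AES-256 key, randomly generated for the demo
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	var c SecretCipher
	c, _ = newAESGCM(key)
	ct, _ := c.Encrypt([]byte("db-root-password"))
	pt, _ := c.Decrypt(ct)
	fmt.Println(string(pt)) // db-root-password
}
```

Delegating the interface rather than baking in one backend is the same tradeoff Kubernetes made: the platform stays vendor-neutral, and compliance-driven choices stay with the operator.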
3. Octelium has Session resources https://octelium.com/docs/octelium/latest/management/core/se... Every User can have one or more Sessions, where every Session is represented by an opaque JWT-like access token. These tokens are used internally by the octelium/octeliumctl clients after a successful login; they are also set as HTTPOnly cookies for BeyondCorp browser-based Sessions, and they are used directly as bearer tokens by WORKLOAD Users for client-less access to HTTP-based resources. You can set different permissions for different Users, and even different permissions for different Sessions of the exact same User, via the owning Credentials or via your Policies. The OAuth2 client credentials flow is only intended for WORKLOAD Users; HUMAN Users don't use OAuth2 client credentials at all. They just log in via OIDC/SAML through the web Portal, or manually via an issued authentication token, which is not generally recommended for HUMAN Users. OAuth2 is meant for WORKLOAD Users to securely access HTTP-based Services without any clients or SDKs. OAuth2 scopes are not really related to zero trust, as mentioned in the docs; they are just an additional way for applications to further restrict their own permissions, not to add new ones beyond those already set by your Policies.
4. An Octelium Cluster runs on top of a k8s cluster. In a typical multi-node production k8s cluster, you should use at least one node for the control plane and another separate node for the data plane, and scale up as your needs grow. Knowledge of Kubernetes isn't really required to manage an Octelium Cluster; as I mentioned above, Octelium resources and k8s resources are completely separate. The one exception where you have to deal directly with the underlying k8s cluster is setting/updating the Cluster TLS certificate, which needs to be fed to the Octelium Cluster via a specific k8s secret as mentioned in the docs. Apart from that, you don't need to understand anything about the underlying k8s cluster.
5. To explain Octelium more technically from a control-plane/data-plane perspective: Octelium does with identity-aware proxies what Kubernetes itself does with containers. It builds a whole control plane around the identity-aware proxies, which themselves represent and implement the Octelium Services: it automatically deploys them whenever you create an Octelium Service via `octeliumctl apply`, scales them up and down whenever you change the Octelium Service replicas, and eventually cleans them up whenever you delete the Octelium Service. As I said, it's similar to what k8s itself does with containers, where your interactions with the k8s API server, whether via kubectl or programmatically, are enough for the cluster to spin up the containers and do all the CRI/CNI work automatically, without your having to know or care which node actually runs a specific container.
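The declare-and-converge behavior described above can be sketched as a tiny reconcile step: compare the declared Service replicas against what's running and emit the new desired state. All names here are invented for illustration; this is the general pattern, not Octelium's actual controller code.

```go
package main

import "fmt"

// desiredService mimics a declared Service with a replica count,
// like what `octeliumctl apply` would submit.
type desiredService struct {
	Name     string
	Replicas int
}

// reconcile returns the next running state: declared Services are created
// or scaled to their declared count, and anything running that is no
// longer declared is dropped (i.e. cleaned up on Service deletion).
func reconcile(desired []desiredService, running map[string]int) map[string]int {
	next := map[string]int{}
	for _, d := range desired {
		next[d.Name] = d.Replicas
	}
	return next
}

func main() {
	running := map[string]int{"ssh-gw": 1, "old-svc": 2}
	desired := []desiredService{
		{Name: "ssh-gw", Replicas: 3}, // scale up
		{Name: "pg-main", Replicas: 1}, // create
		// "old-svc" is absent: it gets cleaned up
	}
	fmt.Println(reconcile(desired, running))
}
```

The real work (spinning up proxy deployments, CRI/CNI plumbing) is delegated to the underlying k8s cluster, which is exactly the analogy the reply draws.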
As for the suggestions:
1. I am not really interested in SaaS offerings myself, and you can clearly see that the architecture is meant for self-hosting as opposed to a separate cloud-based control plane. I do, however, offer both free and professional support for companies that need to self-host Octelium on their own infrastructure according to their own needs.
2. I am not sure I understand this one. But if I understood you correctly, then as I said above, Octelium resources are completely different and separate from k8s resources regardless of the visual YAML resemblance. Octelium has its own API server, rscServer, and data store, and it does not use CRDs or touch the k8s data store, whether that's etcd or something else.
Pick one or two descriptive phrases per subject. If you need more sentences, that's okay.
For example:
> Octelium, as described more in detail in the repo's README, is simply an open source, self-hosted, unified platform for zero trust resource access that is primarily meant to be a modern alternative to corporate VPNs and remote access tools.
->
"At its core, Octelium is a modern alternative to VPNs. Unlike traditional VPNs, it assumes a zero trust security model. Octelium is open source and built to be self-hosted. The README describes many other use cases and features."
I'm going to run this on my k8s. Congrats for now.
kosolam•1d ago
geoctl•1d ago
alienbaby•1d ago
I scanned the github, and your reply above, and I still don't really get it.
I imagine I would understand it better if I was more fluent in the vocabulary you use and understood what some of the platforms and interesting names did from the get go.
So yea, my 2p - break it down into some use cases from simple - intermediate - advanced, use more straight forward language and less platform / product names. Technical terms are fine, but try not to string a zillion of them together all in one go... it reads a bit too much like a sales pitch trying to cram in as many acronyms and cool sounding things as possible.
geoctl•1d ago
homarp•1d ago
explain simple use cases.
explain why you built it, how you use it.
explain the 'size' of it (it requires k8s so might not be for my small homelab)
compare to 'similar' offerings.
reachableceo•1d ago
I recommend using an LLM to rewrite it far more succinctly.
reachableceo•1d ago
This product? Framework? Solution? seems to be exactly what I've been looking for / attempting to put together for my company. We are entirely self-hosted and use only FLO software.
We use Cloudron as our core PaaS and have been looking for a FLO zero trust / identity-aware proxy for DB/RDP/SSH.
Happy to be your flagship customer.
We have a brand new self-hosted k8s cluster deployed. We use Wazuh as our SIEM and LibreNMS for monitoring/alerting.
Currently we use Tailscale (with MagicDNS disabled; we hand out our internal Pi-hole IP as our recursive DNS server), and we have an authoritative DNS server for our internal corporate domain.
Charles@turnsys.com reaches me pretty quickly most days. Would love to deploy this immediately and write a testimonial etc
geoctl•1d ago
catlifeonmars•1d ago
To me this sounds like an L7 identity & access management layer for wireguard, but again I had trouble parsing the readme.
geoctl•1d ago