
France's homegrown open source online office suite

https://github.com/suitenumerique
241•nar001•2h ago•124 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
18•bookofjoe•17m ago•7 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
382•theblazehen•2d ago•136 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
67•AlexeyBrin•3h ago•13 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
43•onurkanbkrc•3h ago•2 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
750•klaussilveira•18h ago•234 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1009•xnx•23h ago•571 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
115•alainrk•3h ago•127 comments

First Proof

https://arxiv.org/abs/2602.05192
15•samasblack•47m ago•9 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
139•jesperordrup•8h ago•55 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
10•vinhnx•1h ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
9•rbanffy•3d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
94•videotopia•4d ago•23 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
30•matt_d•4d ago•6 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
148•matheusalmeida•2d ago•40 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
255•isitcontent•18h ago•27 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
267•dmpetrov•18h ago•142 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
534•todsacerdoti•1d ago•258 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
411•ostacke•1d ago•105 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
58•helloplanets•4d ago•57 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
354•vecti•20h ago•160 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
10•sandGorgon•2d ago•2 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
327•eljojo•21h ago•198 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
452•lstoll•1d ago•296 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
365•aktau•1d ago•192 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
7•andmarios•4d ago•1 comment

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
295•i5heu•21h ago•249 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
105•quibono•5d ago•30 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
55•gmays•13h ago•22 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1108•cdrnsf•1d ago•488 comments

Even Realities Smart Glasses: G2

https://www.evenrealities.com/smart-glasses
38•gessha•2mo ago

Comments

ktallett•2mo ago
I like the idea a lot more than other implementations (although I still think the original Google Glass was great), but I do feel the market for these, outside of the bankers and financial industry that loves to show off with tech, is primarily dorks like myself, and dorks like myself often like to be able to fiddle. The fact that I can't be trusted to install applications that I can make as I go, with an easy-to-use API, seems a mistake. I see the Even Hub, but that seems far off considering there are no details about it.
geraldwhen•2mo ago
No open App Store is a non-starter.
stavros•2mo ago
Oof, yeah. Why do I want this if I can't run code on it? Useless.
volemo•2mo ago
They have an open BLE-based protocol; you can display whatever you want on the screen.

https://github.com/even-realities
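
For a sense of scale, here is a minimal sketch of what talking to the glasses over that protocol could look like in Python with the bleak library. The Nordic-UART-style UUID, the device-name filter, and the bare-string framing are assumptions to check against the demo code in that repo, not an official API:

    import asyncio
    from bleak import BleakClient, BleakScanner

    # Write-characteristic UUID, assumed here to follow the Nordic
    # UART convention that community reverse engineering reports.
    UART_RX = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"

    async def send_text(text: str) -> None:
        # Find the glasses by advertised name (assumed to contain "Even").
        device = await BleakScanner.find_device_by_filter(
            lambda d, ad: bool(d.name and "Even" in d.name)
        )
        if device is None:
            raise RuntimeError("glasses not found")
        async with BleakClient(device) as client:
            # Hypothetical framing: the real protocol adds opcodes,
            # sequence numbers, and chunking; see the demo app.
            await client.write_gatt_char(UART_RX, text.encode("utf-8"))

    asyncio.run(send_text("hello from my own app"))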

ktallett•2mo ago
I'm not seeing anything mentioning that at this link.
volemo•2mo ago
There is a big README panel saying:

> We have now released a demo source code to show how you can build your own application (iOS/Android) to interact with G1 glasses.

> More interfaces and features will also be open soon to give you better control of the hardware.

With a link to the demo app just below that with a detailed explanation of the protocol and currently available features.

ktallett•2mo ago
Well that was awfully blind of me! Apologies!
j-bos•2mo ago
> The fact I can't be trusted to install applications that I can make as I go

Thanks, this was key in deciding whether to consider this brand at all.

KurSix•2mo ago
Google Glass at least gave devs something to play with out of the gate
boriskourt•2mo ago
I wonder how they handle the whole notification stack with iOS. Did anyone try the first one? A lot of non-Apple wearables have issues with that.
throawayonthe•2mo ago
there's an ai lol
volemo•2mo ago
Isn’t AI everywhere nowadays? I wouldn’t be surprised if there were AI toasters and fridges.
cjs_ac•2mo ago
I really like the aesthetic of these, both the glasses themselves and the UI. However, I have the same problem with these as with smartwatches: the apps don't solve any of my problems.
amelius•2mo ago
> the apps don't solve any of my problems

Reminds me of when dumbphones were introduced and people said things like, "Why do I need to have a phone with me all the time?"

Eisenstein•2mo ago
I don't remember anyone saying that.
ChrisMarshallNY•2mo ago
In the 1980s, it was pretty common.

This is what cellphones looked like, back then: https://share.google/z3bBbfhT43EHcDYoc

Cellphones actually were quite small during the 1990s. I used to go to Japan, and they got downright tiny there during that time.

Smartphones actually increased the size (but not to 1980s scale).

chrischen•2mo ago
Real-time translation is a really good use case. The problem is that most implementations, such as AirPods Live Translate, are not great.
jhugo•2mo ago
I have the G1 glasses and unfortunately the microphones are terrible, so the live translation feature barely works. Even if you sit in a quiet room and try to make conditions perfect, the accuracy of transcription is very low. If you try to use it out on the street it rarely gets even a single word correct.
chrischen•2mo ago
This is the sad reality of most of these AI products: they ship poor implementations of features on top of the hardware. It seems like if they just picked one of these features and did it well, the glasses would be useful.

Meta has a model just for isolating speech in noisy environments (the “live captioning” feature) and it seems that’s also the main feature of the Aircaps glasses. Translation is a relatively solved problem. The issue is isolating the conversation.

I’ve found Meta is pretty good about not overpromising features, and as a result, even though they probably have the best hardware and software stack of any glasses, the stuff you can do with the Ray-Ban Display is extremely limited.

Xss3•2mo ago
Is it even possible to translate in real time? In many languages and sentences, the meaning and translation need to completely change thanks to one additional word at the very end. Any accurate translation would need to either wait for the end of a sentence or correct itself after the fact.
chrischen•2mo ago
I guess the translation can always update itself in real time if the model is fast enough.
jhugo•2mo ago
Live translation is a well solved problem by this point — the translation will update as it goes, so while you may have a mistranslation visible during the sentence, it will correct when the last word is spoken. The user does need to have awareness of this but in my experience it works well.

Bear in mind that simultaneous interpretation by humans (eg with a headset at a meeting of an international organisation) has been a thing for decades.
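
The correct-as-you-go behaviour is simple to picture in code. A minimal sketch, where transcribe_stream() and translate() are hypothetical stand-ins for whatever ASR and MT engines are in use; the key point is that each partial transcript is retranslated and the displayed line overwritten, so the output self-corrects once the final word arrives:

    def live_translate(transcribe_stream, translate, render):
        # transcribe_stream() yields successively longer partial
        # transcripts of the sentence being spoken.
        latest = ""
        for partial in transcribe_stream():
            latest = partial
            render(translate(latest))  # overwrite the shown line, don't append
        # Once the sentence is complete, the last render is the
        # stable, corrected translation.
        render(translate(latest))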

makeitdouble•2mo ago
I've been in many situations where I wanted translations, and I can't think of one where I'd actually want to rely on either glasses or the airpods working like they do in the demos.

The crux of it for me:

- if it's not a person, it will be out of sync; you'll be stopping it every 10 seconds to get the translation. One could just as well use their phone, it would be the same, and there's a strong chance the media is already playing from there, so having the translation embedded would be an option.

- with a person, the other person needs to understand when your translation is going on, and when it's over, so they know when to give an answer or when they can go on. Having a phone in plain sight is actually great for that.

- the other person has no way to check if your translation is completely out of whack. Most of the time they have some vague understanding, even if they can't really speak. Having the translation in the glasses removes any possible control.

There are a ton of smaller points, but all in all the barrier for a translation device to become magic and just work plugged in your ear or glasses is so high I don't expect anything beating a smartphone within my lifetime.

chrischen•2mo ago
Some of your points are already addressed by current implementations. AirPods Live Translate uses your phone to display what you say to the target person, and the target person's speech is played to your AirPods. I think the main issues are that there is a massive delay and Apple's translation models are inferior to ChatGPT's. The other thing is that the AirPods don't really add much: it works the same as if you had the translation app open and both people were talking to it.

Aircaps demos show it to be pretty fast and almost real time. Meta's live captioning works really fast and is supposed to be able to pick out who is talking in a noisy environment by having you look at the person.

I think most of your issues are just a matter of the models improving themselves and running faster. I've found translations tend to not be out of whack, but this is something that can't really be solved except by having better translation models. In the case of Airpods live translate the app will show both people's text.

makeitdouble•2mo ago
That's understating the lag. Faster will always be better, but even "real time" still requires the other person to complete their sentence before you get a translation (there is the edge case of the other language having a similar grammatical structure and word order, but IMHO that's rare), and you catch up from there. That's enough lag to warrant putting the whole translation process literally on the table.

I do see real improvements in the models; for IRL translation I just think phones are already very good at this, and improving from there will be exponentially difficult.

IMHO it's the same for "bots" intervening (commenting/reacting on exchanges, etc.) in meetings. Interfacing multiple humans in the same scene is always a delicate problem.

DrewADesign•2mo ago
And I hate the aesthetics of them (for me), which is going to be a huge problem for the smart glasses world. Glasses dramatically change how you look, so few frames look good on more than a handful of face types, and that’s not even considering differences in personal style. Unless someone comes up with a core that can be used in a bunch of different frame shapes, I can’t see any of these being long-term products.
KurSix•2mo ago
The hardware can look amazing, but if the software doesn't offer something meaningfully better than pulling out your phone, it ends up as an expensive novelty
KaiserPro•2mo ago
Nice idea, but no world-lock rendering. (That's hard, so we'll let them off.)

However, you are limited in what you can do.

There are no speakers, which they pitch as a "simpler, quieter interface". That's great, but it means that _all_ your interactions are visual, even if they don't need to be.

I'm also not sure about the microphone setup; if you're doing a voice assistant, you need beamforming/steering.

However, the online context in "conversate" mode is quite nice. I wonder how useful it is. They hint at proper context control ("we can remember your previous conversations"), but that's a largely unsolved problem on large machines, let alone on-device.
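
For reference, the simplest form of that steering is a delay-and-sum beamformer: delay each microphone's signal so that sound from the chosen direction lines up, then average, which reinforces the on-axis source and attenuates off-axis noise. A toy numpy sketch with made-up array geometry, not anything specific to these glasses:

    import numpy as np

    def delay_and_sum(signals, theta, spacing=0.02, fs=16_000, c=343.0):
        """signals: (n_mics, n_samples) from a uniform linear mic array."""
        n_mics, n_samples = signals.shape
        t = np.arange(n_samples) / fs
        out = np.zeros(n_samples)
        for i in range(n_mics):
            # Per-mic steering delay for a source at angle theta.
            tau = i * spacing * np.cos(theta) / c
            # Fractional delay via linear interpolation: y(t) = x(t - tau).
            out += np.interp(t - tau, t, signals[i])
        return out / n_mics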

volemo•2mo ago
Why would you need world lock rendering on a 640x350 monochrome display? It’s a personal dashboard, not a “spatial computer”.
KaiserPro•2mo ago
Maps, but also being able to move your head so that you can see past the thing on the screen is useful.

For people who are prone to motion sickness, it's also really useful to have it tied to the global frame. (I don't have that, fortunately.)

volemo•2mo ago
> being able to move your head so that you can see past the thing on the screen is useful

Afaiu, the dashboard is positioned above you, so you have to tilt your head up to see it and it shouldn’t obstruct anything important in regular life.

nylonstrung•2mo ago
I bought this but ultimately returned it, as it didn't really solve any problems: it's a complete walled garden with very sparse functionality right now.

It's a cool form factor, but the built-in transcription, AI, etc. are not very well implemented, and I cannot imagine a user viewing this as essential rather than a novelty gadget.

sprash•2mo ago
> complete walled garden

Not really. You can build your own apps [1].

1.: https://github.com/even-realities/EvenDemoApp

throwuxiytayq•2mo ago
Not enough. I am not putting any hardware on my nose if I can't fully control the software running on it. Is the OS open source?
sprash•2mo ago
This is pretty low-level access. The phone you are going to pair it with, which provides the network access, is definitely more closed than this.
bee_rider•2mo ago
It is bad that phones are closed. But, at least they aren’t stuck to our faces and able to observe our every move.
volemo•2mo ago
These glasses don’t have a camera.
bee_rider•2mo ago
Ah, my mistake, I just assumed. Well, in that case the lack of openness is less of an issue.
beeflet•2mo ago
No, it isn't
yjftsjthsd-h•2mo ago
My phone is running an open source OS that I have low-level control of.
embedding-shape•2mo ago
Seems they even included a hard-coded API key for DeepSeek's API that still works, which doesn't exactly inspire confidence in their engineering.
hasperdi•2mo ago
If you share that API key here...

- we all get to use a free LLM

- they'll learn to do it properly next time

So it's a win-win situation

embedding-shape•2mo ago
If I instead don't, and let you know that the key is there in the source code, hopefully at least one deserving person might learn how to look through source code better, and none of the lazy people get it :)

So no.
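
In that spirit, here's a crude way to go looking yourself: walk a checked-out repo and flag long string literals on lines that mention "key". This is a heuristic sketch only; real scanners such as gitleaks or trufflehog do this properly:

    import pathlib, re

    # Long alphanumeric string literals are key-shaped; 24+ chars is arbitrary.
    LITERAL = re.compile(r"""["']([A-Za-z0-9_\-]{24,})["']""")

    for path in pathlib.Path(".").rglob("*"):
        if path.is_file() and path.suffix in {".py", ".js", ".kt", ".swift", ".java", ".dart"}:
            for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if "key" in line.lower():
                    m = LITERAL.search(line)
                    if m:
                        print(f"{path}:{n}: possible hard-coded key {m.group(1)[:8]}...")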

einpoklum•2mo ago
I've never used any smart glasses, but I do wear prescription glasses ("dumb" glasses?); don't these smart-glasses products all clash with the need for prescription lenses? I mean, either they each have to provide the entire possible range of correction profiles, for use instead of what people wear now, or they need to be attachable overlays for regular prescription glasses - which is complicated and doesn't look like what the providers are doing ATM. Or - am I getting it wrong?
ChrisMarshallNY•2mo ago
Things like Vision and Quest have the ability to adapt to vision issues, but these things likely don’t.

I guess they could use a common “generic” form factor, that would allow prescription lenses to be ordered.

That said, this is really the ideal form factor (think Dale, in Questionable Content), and all headsets probably want to be it, when they grow up.

boriskourt•2mo ago
These offer a wide-ish profile of prescription lens options within the three lens types they support. It is definitely not the full gamut though.
swiftcoder•2mo ago
I love how every player in this space is building exactly the same product, and none of them seems to have a compelling pitch for why someone would need it.
dbbk•2mo ago
Teleprompter is pretty original and compelling
swiftcoder•2mo ago
I guess. I mean... how often do you use an actual teleprompter? I'm guessing apart from politicians and newscasters, the answer is pretty much never.
basedrum•2mo ago
Anyone who did any public speaking?
swiftcoder•2mo ago
I'm not sure how many actually use teleprompters, because it regularly bothers me how many public figures are staring at their notes on the podium throughout their speeches.

Mind you, I grew up in the handful-of-index-cards-and-memorise-the-damn-speech era.

dbbk•2mo ago
That's probably because access to a real teleprompter is pretty difficult?
fsiefken•2mo ago
For me it's like the Pebble of smart-glasses land: simple and elegant. Less is more: just calendar, tasks, notes, and AI. The rest I can do on my laptop or phone (with or without other display glasses). I do wish there were a way to use the LLM on my Android phone with it and, if possible, to write my own app for it, so I'm not dependent on the internet and can have my HUD/G2 as a lightweight, custom-made AI assistant.
qweiopqweiop•2mo ago
Am I the only one who feels hesitant to even interact with someone wearing smart glasses? I have no idea if they could be recording me.
KurSix•2mo ago
It's similar to how people felt when Google Glass first showed up. Until there's some universally understood signal like a visible recording light (that can't be turned off), I think that unease is going to stick around
qweiopqweiop•2mo ago
Even with a light, I just made a quick Google search and the top Reddit thread is about how easy it is to cover one up with black nail polish.
volemo•2mo ago
These particular glasses don't have a camera.
jtrn•2mo ago
The only thing that matters is how easily I can customize what is shown on the screen. Everything else is probably just annoying, like the translation or map features, which I assume will be finicky and useless. If the ring had four-way arrows and OK/Back buttons, and the glasses had a proper SDK for creating basic navigation and retrieval, such as the ability to communicate with HTTP APIs, there would be no limit to the useful things I could create. However, if I'm forced to use built-in applications only, I have very little faith that it would actually be useful, considering how bad the last generation of applications for these devices was.
amelius•2mo ago
> The only thing that matters is how easily I can customize what is shown on the screen.

What matters more is how they support different eye-distances (interpupillary distance, IPD).

jtrn•2mo ago
I assumed that the display is actually watchable, as it seems to be. I have multiple AR and VR glasses, and I have never gotten fed up with them due to an inability to view the content properly. But having to struggle with getting it to show the content I want...

For instance, the teleprompter is terrible and buggy when it tries to follow along based on voice. A simple clicker for moving forward in a text file would be better than how it currently works.

How many people say they lost interest due to ocular issues versus complaints that it’s just not useful?

Seriously. A simple file browser with support for text files only would be more useful than the finicky G1 apps.

Of course visual issues could occur for someone, but it's so aggravating that they can't just put in some sort of proper customization for the content.

Anonbrit•2mo ago
There's a Bluetooth API for the G1 that's been pretty fully reverse engineered. It wouldn't be at all difficult to put an HTTP wrapper around it, if somebody hasn't already. I suspect you could even vibe-code it in a weekend.
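
The rough shape of that wrapper: a tiny local server that accepts POSTed text and forwards it over the reverse-engineered BLE link. send_text_over_ble() below is a hypothetical stand-in for that BLE layer (see the bleak sketch upthread):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def send_text_over_ble(text: str) -> None:
        ...  # hypothetical: frame, chunk, and write over the BLE connection

    class GlassesHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            send_text_over_ble(self.rfile.read(length).decode("utf-8"))
            self.send_response(204)  # no response body needed
            self.end_headers()

    HTTPServer(("127.0.0.1", 8080), GlassesHandler).serve_forever()

Then anything that can speak HTTP (curl -d 'hello' localhost:8080) can push text to the display.
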
spigottoday•2mo ago
Under Support, a number of policies are listed. The privacy policy is not one of them. No thanks.
ripped_britches•2mo ago
If you run this site, look into compressing your images more! They take forever to load.
beeflet•2mo ago
I don't have time to fiddle around with some locked-in ecosystem in exchange for a little more productivity or the ability to pretend not to be using my computer. And I don't even have a day job.

If it were just a heads-up display for Android, like Xreal, but low-power and wireless, that might be cool for when I'm driving. But everyone wants to make AI glasses locked into their own ecosystem. Everyone wants to displace the smartphone, from the Rabbit R1 to the new Ray-Bans. It's impossible.

In the end this tech will all get democratized and open-sourced anyway, so I have to hand it to Meta and the others for throwing money around and doing all this free R&D for the greater good.

hereme888•2mo ago
So we're already used to people looking crazy talking to themselves (talking via BT headphones).

Now we're going to see people's eyes moving around like crazy.

volemo•2mo ago
Would that attract your attention? I see people with nystagmus often enough not to care.
hereme888•2mo ago
It would. I don't know if you mean people reading phone screens or on drugs or what, but I rarely ever see it outside clinical settings.
volemo•2mo ago
> if you mean people reading phone screens or on drugs or what

Nope, just people in my regular life, colleagues, etc. having that little twitch in their eyes.