
Mathematics for Computer Science (2018) [pdf]

https://courses.csail.mit.edu/6.042/spring18/mcs.pdf
83•vismit2000•3h ago•10 comments

What Happened to WebAssembly

https://emnudge.dev/blog/what-happened-to-webassembly/
101•enz•2h ago•86 comments

How to Code Claude Code in 200 Lines of Code

https://www.mihaileric.com/The-Emperor-Has-No-Clothes/
514•nutellalover•14h ago•179 comments

Surveillance Watch – a map that shows connections between surveillance companies

https://www.surveillancewatch.io
5•kekqqq•33m ago•0 comments

Why I left iNaturalist

https://kueda.net/blog/2026/01/06/why-i-left-inat/
185•erutuon•8h ago•94 comments

European Commission issues call for evidence on open source

https://lwn.net/Articles/1053107/
90•pabs3•2h ago•35 comments

Embassy: Modern embedded framework, using Rust and async

https://github.com/embassy-rs/embassy
209•birdculture•11h ago•85 comments

Sopro TTS: A 169M model with zero-shot voice cloning that runs on the CPU

https://github.com/samuel-vitorino/sopro
235•sammyyyyyyy•13h ago•87 comments

Hacking a Casio F-91W digital watch (2023)

https://medium.com/infosec-watchtower/how-i-hacked-casio-f-91w-digital-watch-892bd519bd15
89•jollyjerry•4d ago•25 comments

Do not mistake a resilient global economy for populist success

https://www.economist.com/leaders/2026/01/08/do-not-mistake-a-resilient-global-economy-for-populi...
148•andsoitis•3h ago•146 comments

On Getting Hacked

https://ahmeto.com/post/on-getting-hacked
54•ahmetomer•3d ago•39 comments

Bose has released API docs and opened the API for its EoL SoundTouch speakers

https://arstechnica.com/gadgets/2026/01/bose-open-sources-its-soundtouch-home-theater-smart-speak...
2307•rayrey•19h ago•343 comments

Richard D. James aka Aphex Twin speaks to Tatsuya Takahashi (2017)

https://web.archive.org/web/20180719052026/http://item.warp.net/interview/aphex-twin-speaks-to-ta...
171•lelandfe•12h ago•52 comments

The Jeff Dean Facts

https://github.com/LRitzdorf/TheJeffDeanFacts
466•ravenical•21h ago•166 comments

The Unreasonable Effectiveness of the Fourier Transform

https://joshuawise.com/resources/ofdm/
227•voxadam•15h ago•93 comments

Anthropic blocks third-party use of Claude Code subscriptions

https://github.com/anomalyco/opencode/issues/7410
340•sergiotapia•6h ago•264 comments

Photographing the hidden world of slime mould

https://www.bbc.com/news/articles/c9d9409p76qo
25•1659447091•1w ago•5 comments

1ML for Non-Specialists: Introduction

https://pithlessly.github.io/1ml-intro
4•birdculture•6d ago•2 comments

How Samba Was Written (2003)

https://download.samba.org/pub/tridge/misc/french_cafe.txt
22•tosh•5d ago•14 comments

Mysterious Victorian-era shoes are washing up on a beach in Wales

https://www.smithsonianmag.com/smart-news/hundreds-of-mysterious-victorian-era-shoes-are-washing-...
30•Brajeshwar•3d ago•11 comments

AI coding assistants are getting worse?

https://spectrum.ieee.org/ai-coding-degrades
311•voxadam•18h ago•500 comments

He was called a 'terrorist sympathizer.' Now his AI company is valued at $3B

https://sfstandard.com/2026/01/07/called-terrorist-sympathizer-now-ai-company-valued-3b/
176•newusertoday•16h ago•214 comments

The No Fakes Act has a “fingerprinting” trap that kills open source?

https://old.reddit.com/r/LocalLLaMA/comments/1q7qcux/the_no_fakes_act_has_a_fingerprinting_trap_t...
127•guerrilla•5h ago•53 comments

Grok turns off image generator for most after outcry over sexualised AI imagery

https://www.theguardian.com/technology/2026/jan/09/grok-image-generator-outcry-sexualised-ai-imagery
34•beardyw•1h ago•31 comments

Google AI Studio is now sponsoring Tailwind CSS

https://twitter.com/OfficialLoganK/status/2009339263251566902
642•qwertyforce•14h ago•209 comments

Ushikuvirus: Newly discovered virus may offer clues to the origin of eukaryotes

https://www.tus.ac.jp/en/mediarelations/archive/20251219_9539.html
97•rustoo•1d ago•22 comments

Fixing a Buffer Overflow in Unix v4 Like It's 1973

https://sigma-star.at/blog/2025/12/unix-v4-buffer-overflow/
124•vzaliva•15h ago•33 comments

Show HN: macOS menu bar app to track Claude usage in real time

https://github.com/richhickson/claudecodeusage
123•RichHickson•15h ago•43 comments

Logistics Is Dying; Or – Dude, Where's My Mail?

https://lagomor.ph/2026/01/logistics-is-dying-or-dude-wheres-my-mail/
47•ChilledTonic•8h ago•35 comments

Systematically Improving Espresso: Mathematical Modeling and Experiment (2020)

https://www.cell.com/matter/fulltext/S2590-2385(19)30410-2
30•austinallegro•6d ago•7 comments

Show HN: DeepDream for Video with Temporal Consistency

https://github.com/jeremicna/deepdream-video-pytorch
61•fruitbarrel•20h ago
I forked a PyTorch DeepDream implementation and added video support with temporal consistency. It produces smooth DeepDream videos with minimal flickering, and it's highly flexible: it exposes many parameters and supports multiple pretrained image classifiers, including GoogLeNet. Check out the repo for sample videos! Features:

- Optical flow warps previous hallucinations into the current frame

- Occlusion masking prevents ghosting and hallucination transfer when objects move

- Advanced parameters (layers, octaves, iterations) still work

- Works on GPU, CPU, and Apple Silicon

Comments

reactordev•19h ago
Reminds me of my first acid trip.
dudefeliciano•19h ago
looks cool! i see the classic dog faces when generating video. is it possible to use your own images for the style of the output video?
DustinBrett•19h ago
Looking at that video makes me sick.
noobcoder•19h ago
I remember back in 2018 we used FFmpeg to split clips into frames, hit each with GoogLeNet gradient ascent on a few layers, then blended in the previous frame for crude smoothing
embedding-shape•19h ago
SOTA for frame interpolation today is probably RIFE (https://github.com/hzwer/ECCV2022-RIFE) as far as I know, which is fast as hell and still gives really good results. But it's already 4 years old now; anyone know if there is anything better than RIFE for this sort of stuff today?
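For contrast, the naive baseline that flow-based interpolators like RIFE improve on is a plain cross-fade between two frames. This is a hypothetical sketch of that baseline, not RIFE's API:

```python
import numpy as np

def midpoint_blend(frame_a, frame_b, t=0.5):
    # Blends both positions of a moving object instead of warping one of
    # them, which produces exactly the ghosting that flow-based methods avoid.
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    return ((1.0 - t) * a + t * b).astype(frame_a.dtype)
```

On static content this is indistinguishable from real interpolation; on motion it doubles edges, which is why learned flow models took over.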
echelon•19h ago
This is a trip down memory lane!

I remember when DeepDream first came out, and WaveNet not long after. I was immediately convinced this stuff was going to play a huge role in media production.

I'm a big hobbyist filmmaker. I told all of my friends who actually work in film (IATSE, SAG, etc.) and they were so skeptical. I tried to get them to make an experimental film using DeepDream.

This was about the same time Intel was dabbling in 360 degree filmmaking and just prior to Epic Games / Disney working on "The Volume".

I bought a bunch of Kinects and built a really low-fidelity real time version of what Intel was working on. The sensors are VGA resolution, so it's not at all cinematic.

When Stable Diffusion came out, I hooked up Blender to image-to-image and fed it frames of previz animations to convert to style transferred anime. Then IP Adapter. Then Animate Diff.

My friends were angry with me at this point. AI was the devil. But I kept at it.

I built an animation system for the web three years ago. Nonlinear timeline editing, camera controls, object transformations. It was really crude and a lot of work to produce simple outputs: https://storyteller.ai/

It was ridiculously hard to use. I typically film live action for the 48 Hour Film Project (a twice-annual film "hackathon" that I've done since I was a teenager). I used mocap suits and 3D animation, and this is the result of 48 hours of no sleep:

https://vimeo.com/955680517/05d9fb0c4f

We won two awards for this. The audience booed us.

The image-to-video models came out right after this and immediately sunk this approach. Luma Dream Machine was so easy and looked so much better. Starting frames are just like a director and DP blocking out a scene and then calling action - it solved for the half of the problem I had ignored, which was precisely controlling for look/feel (though this abandons temporal control).

There was a lot of slop, but I admired the work some hard-working people were creating. Those "movie trailers" people were criticizing were easily 10 hours of work with the difficulty of the tech back then.

I found use in model aggregation services like OpenArt and FreePik. ComfyUI is too far removed for me - I appreciate people who can do node magic, but it's not my thing.

I've been working on ArtCraft ( https://github.com/storytold/artcraft ), which is a more artist-centered version for blocking out and precisely articulating scenes.

My friends and I have been making a lot of AI films, and it's almost replaced our photons-on-glass filmmaking output. (We've done some rotoscoped AI + live action work.)

https://www.youtube.com/watch?v=Tii9uF0nAx4 (live action rotoscoped film)

https://www.youtube.com/watch?v=v_2We_QQfPg (EbSynth sketch about The Predator)

https://www.youtube.com/watch?v=tAAiiKteM-U (Robot Chicken inspired Superman parody)

https://www.youtube.com/watch?v=oqoCWdOwr2U (JoJo grinch parody)

We're going to do a feature length film at some point, but we're still building up the toolbox.

If you're skeptical about artists using AI, you should check out Corridor Crew. They're well respected in our field, they have been for over a decade, and they love AI:

https://en.wikipedia.org/wiki/Corridor_Digital

https://www.youtube.com/watch?v=DSRrSO7QhXY

https://www.youtube.com/watch?v=GVT3WUa-48Y

https://www.youtube.com/watch?v=iq5JaG53dho

They're big ComfyUI fans. I just can't get used to it.

Real filmmakers and artists are using this tech now. If you hate AI, please know that we see this more as an exoskeleton than as a replacement. It enables us to reach the look and feel of a $100+ million Pixar, Star Wars, or Marvel film without the budgets, which we could never raise without insane luck or nepotism.

If anything, this elevates us to a place where we will one day be competing with Disney. They should fear us instead of the other way around.

xg15•18h ago
> It enables us to reach the look and feel of a $100+ million Pixar, Star Wars, or Marvel film without the budgets, which we could never raise without insane luck or nepotism.

I can understand filmmakers wanting this, but less so audiences.

The problem is that then everything will look like Marvel or Pixar or Star Wars.

The other problem is that, as an audience member, I can now never be sure if a detail was put somewhere intentionally by the filmmakers or if it was just a random AI addition.

echelon•18h ago
> everything will look like Marvel or Pixar or Star Wars.

I want to immediately push back against that notion. I cited Disney because that's what people are familiar with.

In reality, we're going to have more diversity than ever before.

Stuff like this:

https://www.reddit.com/r/aivideos/comments/1q22s3s/go_slowly...

https://www.linkedin.com/feed/update/urn:li:activity:7409063...

https://www.youtube.com/watch?v=9hlx5Rslrzk

These are bad examples, but they're just what I have in front of me.

"I've seen things you people wouldn't believe" is a fitting quote.

There's going to be incredible stylistic diversity.

> I now can never be sure if a detail was put somewhere intentionally by the filmmakers or if it was just a random AI addition.

If it's in the film, it was intentional. What you won't know is whether it was serendipity. And the fact is quite a bit of filmmaking today is already serendipity.

manbart•18h ago
I really enjoyed "Take Out," I can't believe people booed it! I especially liked the fortune cookie scene
echelon•18h ago
Thank you so much!

It was gut-wrenching to hear after a weekend of sleepless effort. This was a community I've been a part of for decades, and I've never had my work received like that before.

But at the same time, this was right after the strikes and just as AI was looking like it might threaten jobs. I totally and completely understand these folks - most of whom are actively employed in the industry - feeling threatened and vulnerable.

In the past two years the mood has definitely lightened. More filmmakers are incorporating AI into their work and it's becoming accepted now. Of the films that used AI in last October's 48 Hour Film Project, none of them were booed, and a few of them won awards.

We animated all of the gore in ours with AI (warning - this is bloody and gory) :

https://drive.google.com/file/d/1m6eUR5V55QXA9p6SLuCM8LdFMBq...

(This link will get rate limited - sorry. An untold volume of indie films are on the "deep web" as Google Drive links.)

We really had to rush on this one. We only had 24 hours due to a scheduling snafu and had to roll with the punches. The blood didn't do exactly what we wanted, and we had to keep the script super tight. Limited location, serendipitous real party we crashed with a filmmaking request, ...

These are like hackathons and game jams.

xrd•15h ago
I have to say, I'm enthralled by your two comments and what you've shared. My favorite director is Richard Linklater, mainly because his storytelling is incredible, but at a close second is the way he has pushed the boundaries of narrative with technology like rotoscoping (and which you referenced). This is a fascinating thread. I'm definitely a fan of your work.
CyberDildonics•17h ago
> you should check out Corridor Crew. They're well respected in our field,

In what field? They're youtubers, none of them have held a single job in visual effects. They make react videos. They aren't in effects, their industry is youtube.

seanw444•16h ago
And they're pretty good at what they do.
CyberDildonics•15h ago
What they do is react videos to things they don't understand, so it would be crazy to look to them as vfx leaders when they haven't even had an entry level job.

That would be like betting on someone who watches boxing every weekend to win a professional fight.

seanw444•15h ago
I don't know what their current video catalog looks like, but they used to do a lot of fairly impressive VFX videos themselves about a decade ago.
imiric•14h ago
That's flat out wrong. They have produced several web series, and their videos feature a lot of visual effects. Just because their career focuses on producing web content doesn't mean they're any less talented than someone working on feature films.

I can't comment on whether they're "well respected" in the VFX industry, but you're being misleadingly hostile.

CyberDildonics•13h ago
> I can't comment on whether they're "well respected" in the VFX industry,

They aren't, because they aren't in the vfx industry.

> you're being misleadingly hostile.

No, this is honesty. People who only know vfx through fake youtubers want to defend them, but it's the blind leading the blind for clicks and views.

> Just because their career focuses on producing web content doesn't mean they're any less talented than someone working on feature films.

They built their channel criticizing people who work on feature films. Their work is good according to them and acolytes who buy into it, but people who think they represent vfx don't realize this and suddenly it isn't fair to point out the truth.

imiric•13h ago
> No, this is honesty.

No, it's bullshit.

From Wikipedia[1]:

> Corridor Digital LLC is an American independent production studio based in Los Angeles, known for creating pop-culture-related viral online short-form videos since 2010, as well as producing and directing the Battlefield-inspired web series Rush and the YouTube Premium series Lifeline. It has also created television commercials for various companies, including Machine Zone and Google.

You clearly have some bone to pick with them, but they're accomplished VFX artists. Whether they're good or not is a separate matter, but they're not "fake youtubers" or misleading anyone, unlike yourself.

[1]: https://en.wikipedia.org/wiki/Corridor_Digital

CyberDildonics•10h ago
Making web videos doesn't mean that they are able to do the visual effects that they criticize.

They don't make "vfx artists react" videos to low grade web series.

They call themselves vfx artists when they have never done that. They make web videos, they criticize professional work and you are completely ignoring that.

chabes•16h ago
The artist Panda Bear has a music video from this time called Crosswords. It uses DeepDream about halfway through. I thought it was pretty groundbreaking when it came out a decade ago. It seems much tamer now by today's standards, but it still gives me the feels

Edit: link https://m.youtube.com/watch?v=2EXslhx989M&pp=ygUUcGFuZGEgYmV...

RIMR•12h ago
I always thought that these models were going to revolutionize video compression. You would have something like a ~10GB compression model, and you could compress a 4K movie down to 500MB or something playable by anyone else with the model.

Maybe that's still going to happen?

krapp•11h ago
Why would Disney fear you, though?

They'll still own all of the money-making IP that your models are trained on. You won't actually be able to get away with "X in the style of Disney or Y in the style of Pixar." And they'll have their own in-house models, far more expensive and powerful (likely through regulation) than you'll be able to afford.

You aren't talking about competition, you're talking about emulation. Putting an inferior version of existing properties on the market like The Asylum does.

I watched all of the links you provided. The "live action rotoscoped film" didn't need AI. They could have rented a knight costume or done that in AE; it wouldn't even have been that expensive. It wasn't even a good example of what rotoscoping can do - and I can actually accept rotoscoping and motion capture as a legitimate use of AI.

The concept for the Predator sketch was lame, and it's ripping off the style of Joel Haver, an animator with actual comedy talent. That legitimately kind of makes me mad. That's exactly the sort of thing that makes people hate AI. You aren't even ripping off a corporation here, which I could at least respect in the abstract.

The Superman parody was generic and derivative, but it looked competent. Didn't really look like Robot Chicken though. The way they did the mouths was weird. And the whole plot revolved around a problem that was never actually a problem in the franchise.

The Grinch "anime" didn't look like a good anime. It looked like the kind of thing people criticize when an anime studio cuts costs for a second season. Still frames and very little animation. Inconsistent and generic style.

The horror movie posted below? The "blood" looked awful a lot of the time. The cinematography, as such, didn't carry the story at all. It isn't shot or cut the way an actual movie would be. The actors weren't compelling, the script was tepid.

Understand, I'm really trying not to just shit on this because it's AI, I'm trying to approach it as art because that's what it purports to be. And I can concede that the technical capability of AI has advanced dramatically in this field. They did a Robot Chicken and they did a Joel Haver and I saw a 90s cartoon but... it's bad art. I see no sense of actual unique creative vision anywhere. No sense of an actual artist trying to express something. Nothing that tells me "this could only have been done by AI, and would have been impossible beforehand."

It's like AI people think all you need is the aesthetic, and that the aesthetic they're imitating is a suitable replacement for the actual talent that went into it, but it isn't.

kieojk•17h ago
As the name of the model suggests, the generated videos are full of dreams.