
Show HN: Executable Markdown files with Unix pipes

59•jedwhite•10h ago•48 comments

Show HN: macOS menu bar app to track Claude usage in real time

https://github.com/richhickson/claudecodeusage
138•RichHickson•18h ago•46 comments

Show HN: A geofence-based social network app 6 years in development

https://www.localvideoapp.com
65•Adrian-ChatLocl•16h ago•41 comments

Show HN: Commit-based code review instead of PR-based

https://commitguard.ai
8•moshetanzer•7h ago•0 comments

Show HN: DeepDream for Video with Temporal Consistency

https://github.com/jeremicna/deepdream-video-pytorch
61•fruitbarrel•23h ago•24 comments

Show HN: Ever wanted to look at yourself in Braille?

https://github.com/NishantJoshi00/dith
3•cat-whisperer•5h ago•0 comments

Show HN: A Wall Street Terminal for Everyone

https://marketterminal.com/chart
6•adamfontan•5h ago•4 comments

Show HN: I visualized the entire history of Citi Bike in the browser

https://bikemap.nyc/
109•freemanjiang•1d ago•31 comments

Show HN: I built a tool to create AI agents that live in iMessage

https://tryflux.ai/
28•danielsdk•5d ago•12 comments

Show HN: An all-in-one image crop/split/collage tool (no uploads, no watermark)

https://imagesplitter.tools
3•harperhuang•7h ago•6 comments

Show HN: I built a "Do not disturb" Device for my home office

https://apoorv.page/blogs/over-engineered-dnd
93•quacky_batak•5d ago•49 comments

Show HN: Watch LLMs play 21,000 hands of Poker

https://pokerbench.adfontes.io/run/Large_Models
29•jazarwil•23h ago•18 comments

Show HN: Image Scaler – Privacy-focused image resizing with 60-image batches

https://image-scaler.com/
2•nmczzi•8h ago•1 comment

Show HN: SMTP Tunnel – A SOCKS5 proxy disguised as email traffic to bypass DPI

https://github.com/x011/smtp-tunnel-proxy
136•lobito25•2d ago•44 comments

Show HN: Layoffstoday – Open database tracking for 10k Companies

https://layoffstoday.io/
2•doremon0902•9h ago•2 comments

Show HN: Open database of link metadata for large-scale analysis

https://github.com/rumca-js/RSS-Link-Database-2025
15•renegat0x0•5d ago•1 comment

Show HN: Claude Code for Django

https://github.com/kjnez/claude-code-django
4•cui•10h ago•2 comments

Show HN: Tailsnitch – A security auditor for Tailscale

https://github.com/Adversis/tailsnitch
277•thesubtlety•3d ago•28 comments

Show HN: Free and local browser tool for designing gear models for 3D printing

https://gears.dmtrkovalenko.dev
52•neogoose•2d ago•13 comments

Show HN: Fzf-navigator, a terminal file system navigator

https://github.com/benward2301/fzf-navigator
2•benward2301•11h ago•0 comments

Show HN: We built a permissions layer for Notion

https://notionportals.com/
11•PEGHIN•17h ago•6 comments

Show HN: Mantic.sh – A structural code search engine for AI agents

https://github.com/marcoaapfortes/Mantic.sh
78•marcoaapfortes•2d ago•37 comments

Show HN: DoNotNotify – Log and intelligently block notifications on Android

https://donotnotify.com/
343•awaaz•3d ago•165 comments

Show HN: Legit, Open source Git-based Version control for AI agents

5•jannesblobel•12h ago•0 comments

Show HN: VaultSandbox – Test your real MailGun/SES/etc. integration

https://vaultsandbox.com/
58•vaultsandbox•2d ago•13 comments

Show HN: 48-digit prime numbers every git commit

https://textonly.github.io/git-prime/
66•keepamovin•1w ago•54 comments

Show HN: How I generate animated pixel art with AI and Python

https://sarthakmishra.com/blog/building-animated-sprite-hero
16•sarthak_drool•1d ago•2 comments

Show HN: Prism.Tools – Free and privacy-focused developer utilities

https://blgardner.github.io/prism.tools/
371•BLGardner•3d ago•101 comments

Show HN: KeelTest – AI-driven VS Code unit test generator with bug discovery

https://keelcode.dev/keeltest
28•bulba4aur•1d ago•15 comments

Show HN: Server-rendered multiplayer games with Lua (no client code)

https://cleoselene.com/
79•brunovcosta•4d ago•59 comments

Show HN: DeepDream for Video with Temporal Consistency

https://github.com/jeremicna/deepdream-video-pytorch
61•fruitbarrel•23h ago
I forked a PyTorch DeepDream implementation and added video support with temporal consistency. It produces smooth DeepDream videos with minimal flickering, is highly configurable with many parameters, and supports multiple pretrained image classifiers, including GoogLeNet. Check out the repo for sample videos! Features:

- Optical flow warps previous hallucinations into the current frame (see the sketch after this list)

- Occlusion masking prevents ghosting and hallucination transfer when objects move

- Advanced parameters (layers, octaves, iterations) still work

- Works on GPU, CPU, and Apple Silicon
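
If you're curious how the first two features fit together, here's a minimal sketch of the warp-and-mask step, assuming OpenCV's Farneback flow. The function name, defaults, and structure are illustrative, not the repo's actual API:

    import cv2
    import numpy as np

    def temporal_init(curr_frame, prev_frame, prev_dream,
                      occlusion_thresh=1.0, blend=0.6):
        """Warp the previous frame's hallucination into the current frame,
        mask occluded pixels, and blend to get the starting image for
        this frame's gradient ascent."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

        # Dense flow both ways (fwd: prev -> curr, bwd: curr -> prev).
        fwd = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)
        bwd = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)

        # For each pixel of the current frame, where it came from in the
        # previous frame.
        h, w = curr_gray.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (gx + bwd[..., 0]).astype(np.float32)
        map_y = (gy + bwd[..., 1]).astype(np.float32)

        # Warp the previous hallucination into the current frame's geometry.
        warped = cv2.remap(prev_dream, map_x, map_y, cv2.INTER_LINEAR)

        # Occlusion mask via forward-backward consistency: where the two
        # flows disagree, the pixel was occluded or revealed, so don't
        # transfer the hallucination there (this is what stops ghosting).
        fwd_at_src = cv2.remap(fwd, map_x, map_y, cv2.INTER_LINEAR)
        err = np.linalg.norm(fwd_at_src + bwd, axis=-1)
        valid = (err < occlusion_thresh).astype(np.float32)[..., None]

        # Blend warped hallucination with the raw frame; occluded regions
        # fall back to the raw frame.
        curr = curr_frame.astype(np.float32)
        init = valid * (blend * warped.astype(np.float32) + (1 - blend) * curr)
        init += (1 - valid) * curr
        return init.astype(np.uint8)

The gradient ascent for each frame then starts from this init instead of the raw frame, so hallucinations carry over smoothly rather than re-forming from scratch.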

Comments

reactordev•22h ago
Reminds me of my first acid trip.
dudefeliciano•22h ago
looks cool! i see the classic dog faces when generating video, is it possible to use your own images for the style of the output video?
DustinBrett•22h ago
Looking at that video makes me sick.
noobcoder•22h ago
I remember back in 2018 we used to have FFmpeg split clips into frames, hit each one with GoogLeNet gradient ascent on a layer, then blend in the previous frame for crude smoothing
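
Roughly like this, from memory - a sketch, not our actual script (torchvision's GoogLeNet; the inception4c layer, step size, blend weight, and file paths are all just illustrative):

    import glob, os, subprocess
    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Split the clip into frames with ffmpeg.
    os.makedirs("frames", exist_ok=True); os.makedirs("dream", exist_ok=True)
    subprocess.run(["ffmpeg", "-i", "clip.mp4", "frames/%05d.png"], check=True)

    model = models.googlenet(weights="DEFAULT").eval()
    acts = {}
    # Hook an intermediate inception block; we maximize its activations.
    model.inception4c.register_forward_hook(
        lambda mod, inp, out: acts.update(out=out))

    to_tensor, to_image = transforms.ToTensor(), transforms.ToPILImage()
    prev_out = None
    for i, path in enumerate(sorted(glob.glob("frames/*.png"))):
        img = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
        if prev_out is not None:
            # Crude smoothing: blend in the previous dreamed frame.
            img = 0.7 * img + 0.3 * prev_out
        img = img.detach().requires_grad_(True)
        for _ in range(10):  # gradient ascent on the hooked layer
            model(img)
            acts["out"].norm().backward()
            with torch.no_grad():
                img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
                img.grad = None
        prev_out = img.detach().clamp(0, 1)
        to_image(prev_out[0]).save(f"dream/{i:05d}.png")
    # Reassemble: ffmpeg -framerate 24 -i dream/%05d.png out.mp4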
embedding-shape•22h ago
SOTA for frame interpolation today is probably RIFE (https://github.com/hzwer/ECCV2022-RIFE) as far as I know, which is fast as hell and still gives really good results. But it's already 4 years old now; anyone know if there is anything better than RIFE for this sort of stuff today?
echelon•22h ago
This is a trip down memory lane!

I remember when DeepDream first came out, and WaveNet not long after. I was immediately convinced this stuff was going to play a huge role in media production.

I'm a big hobbyist filmmaker. I told all of my friends who actually work in film (IATSE, SAG, etc.) and they were so skeptical. I tried to get them to make an experimental film using DeepDream.

This was about the same time Intel was dabbling in 360 degree filmmaking and just prior to Epic Games / Disney working on "The Volume".

I bought a bunch of Kinects and built a really low-fidelity real time version of what Intel was working on. The sensors are VGA resolution, so it's not at all cinematic.

When Stable Diffusion came out, I hooked up Blender to image-to-image and fed it frames of previz animations to convert into style-transferred anime. Then IP-Adapter. Then AnimateDiff.

My friends were angry with me at this point. AI was the devil. But I kept at it.

I built an animation system for the web three years ago. Nonlinear timeline editing, camera controls, object transformations. It was really crude and a lot of work to produce simple outputs: https://storyteller.ai/

It was ridiculously hard to use. I typically film live action for the 48 Hour Film Project (a twice-annual film "hackathon" that I've done since I was a teenager). I used mocap suits and 3D animation, and this is the result of 48 hours of no sleep:

https://vimeo.com/955680517/05d9fb0c4f

We won two awards for this. The audience booed us.

The image-to-video models came out right after this and immediately sunk this approach. Luma Dream Machine was so easy and looked so much better. Starting frames are just like a director and DP blocking out a scene and then calling action - it solved the half of the problem I had ignored, which was precisely controlling the look/feel (though it abandons temporal control).

There was a lot of slop, but I admired the work some hard-working people were creating. Those "movie trailers" people were criticizing were easily 10 hours of work with the difficulty of the tech back then.

I found use in model aggregation services like OpenArt and FreePik. ComfyUI is too far removed for me - I appreciate people who can do node magic, but it's not my thing.

I've been working on ArtCraft ( https://github.com/storytold/artcraft ), which is a more artist-centered version for blocking out and precisely articulating scenes.

My friends and I have been making a lot of AI films, and it's almost replaced our photons-on-glass filmmaking output. (We've done some rotoscoped AI + live action work.)

https://www.youtube.com/watch?v=Tii9uF0nAx4 (live action rotoscoped film)

https://www.youtube.com/watch?v=v_2We_QQfPg (EbSynth sketch about The Predator)

https://www.youtube.com/watch?v=tAAiiKteM-U (Robot Chicken inspired Superman parody)

https://www.youtube.com/watch?v=oqoCWdOwr2U (JoJo grinch parody)

We're going to do a feature length film at some point, but we're still building up the toolbox.

If you're skeptical about artists using AI, you should check out Corridor Crew. They're well respected in our field, they have been for over a decade, and they love AI:

https://en.wikipedia.org/wiki/Corridor_Digital

https://www.youtube.com/watch?v=DSRrSO7QhXY

https://www.youtube.com/watch?v=GVT3WUa-48Y

https://www.youtube.com/watch?v=iq5JaG53dho

They're big ComfyUI fans. I just can't get used to it.

Real filmmakers and artists are using this tech now. If you hate AI, please know that we see this more as an exoskeleton than as a replacement. It enables us to reach the look and feel of a $100+ million Pixar, Star Wars, or Marvel film without the kind of budget we could never get short of insane luck or nepotism.

If anything, this elevates us to a place where we will one day be competing with Disney. They should fear us instead of the other way around.

xg15•21h ago
> It enables us to reach the look and feel of a $100+ million Pixar, Star Wars, or Marvel film without the kind of budget we could never get short of insane luck or nepotism.

I can understand filmmakers wanting this, but less so audiences.

The problem is that then everything will look like Marvel or Pixar or Star Wars.

The other problem is that, as an audience member, I can now never be sure whether a detail was put somewhere intentionally by the filmmakers or was just a random AI addition.

echelon•21h ago
> everything will look like Marvel or Pixar or Star Wars.

I want to immediately push back against that notion. I cited Disney because that's what people are familiar with.

In reality, we're going to have more diversity than ever before.

Stuff like this:

https://www.reddit.com/r/aivideos/comments/1q22s3s/go_slowly...

https://www.linkedin.com/feed/update/urn:li:activity:7409063...

https://www.youtube.com/watch?v=9hlx5Rslrzk

These are bad examples, but they're just what I have in front of me.

"I've seen things you people wouldn't believe" is a fitting quote.

There's going to be incredible stylistic diversity.

> I now can never be sure if a detail was put somewhere intentionally by the filmmakers or if it was just a random AI addition.

If it's in the film, it was intentional. What you won't know is whether it was serendipity. And the fact is quite a bit of filmmaking today is already serendipity.

topocite•1h ago
I just don't agree everything will look like Marvel or Star Wars.

Everything looks like that now because the loss of DVD sales has altered the economics of movies so drastically.

These tools should ultimately empower a golden age of experimentation and unique film making.

I know for myself that I will absolutely be into this some day, but it is just far too early. I can see the potential of AI-generated video, but that is something for the 2030s. Right now it is like the digital audio workstation in 1992. It needs another decade to mature.

manbart•21h ago
I really enjoyed "Take Out," I can't believe people booed it! I especially liked the fortune cookie scene
echelon•21h ago
Thank you so much!

It was gut-wrenching to hear after a weekend of sleepless effort. This was a community I've been a part of for decades, and I've never had my work received like that before.

But at the same time, this was right after the strikes and just as AI was looking like it might threaten jobs. I totally and completely understand these folks - most of whom are actively employed in the industry - feeling threatened and vulnerable.

In the past two years the mood has definitely lightened. More filmmakers are incorporating AI into their work and it's becoming accepted now. Of the films that used AI in last October's 48 Hour Film Project, none of them were booed, and a few of them won awards.

We animated all of the gore in ours with AI (warning - this is bloody and gory) :

https://drive.google.com/file/d/1m6eUR5V55QXA9p6SLuCM8LdFMBq...

(This link will get rate limited - sorry. An untold volume of indie films are on the "deep web" as Google Drive links.)

We really had to rush on this one. We only had 24 hours due to a scheduling snafu and had to roll with the punches. The blood didn't do exactly what we wanted, and we had to keep the script super tight. Limited location, serendipitous real party we crashed with a filmmaking request, ...

These are like hackathons and game jams.

xrd•18h ago
I have to say, I'm enthralled by your two comments and what you've shared. My favorite director is Richard Linklater, mainly because his storytelling is incredible, but at a close second is the way he has pushed the boundaries of narrative with technology like rotoscoping (which you referenced). This is a fascinating thread. I'm definitely a fan of your work.
CyberDildonics•20h ago
> you should check out Corridor Crew. They're well respected in our field,

In what field? They're youtubers; none of them have held a single job in visual effects. They make react videos. They aren't in effects; their industry is youtube.

seanw444•19h ago
And they're pretty good at what they do.
CyberDildonics•19h ago
What they do is react videos to things they don't understand, so it would be crazy to look to them as vfx leaders when they haven't even had an entry level job.

That would be like betting on someone who watches boxing every weekend to win a professional fight.

seanw444•18h ago
I don't know what their current video catalog looks like, but they used to do a lot of fairly impressive VFX videos themselves about a decade ago.
imiric•17h ago
That's flat out wrong. They have produced several web series, and their videos feature a lot of visual effects. Just because their career focuses on producing web content doesn't mean they're any less talented than someone working on feature films.

I can't comment on whether they're "well respected" in the VFX industry, but you're being misleadingly hostile.

CyberDildonics•16h ago
> I can't comment on whether they're "well respected" in the VFX industry,

They aren't, because they aren't in the vfx industry.

> you're being misleadingly hostile.

No, this is honesty. People who only know vfx through fake youtubers want to defend them, but it's the blind leading the blind for clicks and views.

> Just because their career focuses on producing web content doesn't mean they're any less talented than someone working on feature films.

They built their channel criticizing people who work on feature films. Their work is good according to them and acolytes who buy into it, but people who think they represent vfx don't realize this and suddenly it isn't fair to point out the truth.

imiric•16h ago
> No, this is honesty.

No, it's bullshit.

From Wikipedia[1]:

> Corridor Digital LLC is an American independent production studio based in Los Angeles, known for creating pop-culture-related viral online short-form videos since 2010, as well as producing and directing the Battlefield-inspired web series Rush and the YouTube Premium series Lifeline. It has also created television commercials for various companies, including Machine Zone and Google.

You clearly have some bone to pick with them, but they're accomplished VFX artists. Whether they're good or not is a separate matter, but they're not "fake youtubers" or misleading anyone, unlike yourself.

[1]: https://en.wikipedia.org/wiki/Corridor_Digital

CyberDildonics•13h ago
Making web videos doesn't mean that they are able to do the visual effects that they criticize.

They don't make "vfx artists react" videos about low-grade web series.

They make web videos and criticize professional work, and you are completely ignoring that.

chabes•19h ago
The artist Panda Bear has a music video from this time called Crosswords. It uses DeepDream about halfway through. I thought it was pretty groundbreaking when it came out a decade ago. It seems much tamer now by today's standards, but it still gives me the feels

Edit: link https://m.youtube.com/watch?v=2EXslhx989M&pp=ygUUcGFuZGEgYmV...

RIMR•15h ago
I always thought that these models were going to revolutionize video compression. You would have something like a ~10GB compression model, and you could compress a 4K movie down to ~500MB into something playable by anyone else with the model.

Maybe that's still going to happen?

krapp•14h ago
Why would Disney fear you, though?

They'll still own all of the money-making IP that your models are trained on. You won't actually be able to get away with "X in the style of Disney or Y in the style of Pixar." And they'll have their own in-house models, far more expensive and powerful than anything you'll be able to afford (an advantage likely locked in through regulation).

You aren't talking about competition; you're talking about emulation. Putting an inferior version of existing properties on the market, like The Asylum does.

I watched all of the links you provided. The "live action rotoscoped film" didn't need AI. They could have rented a knight costume or done that in AE; it wouldn't even have been that expensive. It wasn't even a good example of what rotoscoping could do - and I can actually accept rotoscoping and motion capture as a legitimate use of AI models.

The concept for the Predator sketch was lame, and it's ripping off the style of Joel Haver, an animator with actual comedy talent. That legitimately kind of makes me mad. That's exactly the sort of thing that makes people hate AI. You aren't even ripping off a corporation here, which I could at least respect in the abstract.

The Superman parody was generic and derivative, but it looked competent. Didn't really look like Robot Chicken though. The way they did the mouths was weird. And the whole plot revolved around a problem that was never actually a problem in the franchise.

The Grinch "anime" didn't look like a good anime. It looked like the kind of thing people criticize when an anime studio cuts costs for a second season. Still frames and very little animation. Inconsistent and generic style.

The horror movie posted above? The "blood" looked awful a lot of the time. The cinematography, as such, didn't carry the story at all. It isn't shot or cut the way an actual movie would be. The actors weren't compelling, and the script was tepid.

Understand, I'm really trying not to just shit on this because it's AI; I'm trying to approach it as art, because that's what it purports to be. And I can concede that the technical capability of AI has advanced dramatically in this field. They did a Robot Chicken, and they did a Joel Haver, and I saw a 90s cartoon, but... it's bad art. I see no sense of actual unique creative vision anywhere. No sense of an actual artist trying to express something. Nothing that tells me "this could only have been done by AI, and would have been impossible beforehand."

It's like AI people think all you need is the aesthetic, and that the aesthetic they're imitating is a suitable replacement for the actual talent that went into it, but it isn't.

kieojk•20h ago
As the name of the model suggests, the generated videos are full of dreams.