Not serious advice.
Oh, you've spent hours on unpixelizing my secrets? Well, congratulations.
You also don't need to match the whole redacted text at once - depending on the size of the pixels, you can probably do just a few characters at a time.
But I helped because a friend dragged me to Amnesty International meetings in college and so I knew there were people who legitimately needed this shit.
So if you want to support suffragists or underground railroads, you're making software that helps people break the law.
Really, we are all breaking some law all the time, which is how oppression works: selective enforcement. "Give me six lines from the most innocent man and I will find in them something to damn his soul."
There is no such thing as "good" or "bad" - actions are meaningless - it's the context that makes the difference.
Example: Sex
Good when the context is consenting adult (humans)
Bad when the context is not.
Further, "One man's 'freedom fighter' is another man's 'terrorist'" - meaning context is very much in the eye of the beholder.
Couple this with the Taoist (?) fable "What luck you lost a horse", where the outcome of an event cannot really be judged immediately; it may take days, months, or years to show.
And you are left with: do we really have any idea of what is right or wrong?
So, my philosophical take is - if it leads toward healthy outcomes (ooo dripping with subjective context there...) then it's /likely/ the right thing to do.
When I spoke with an AI about this recently, it was quick to respond that "recreational drug use 'feels good' at first, but can lead to a very dark outcome", which is partly true, but also demonstrates the first point. Recreational drug use is fine (as far as I am concerned, after my 4th cup of tea) as long as the context isn't "masking" or "crutch" (although in some cases, e.g. PTSD, drug use to help people forget is a vital tool).
> Since pixelation does not protect the contents of the pixelated area (see e.g. https://github.com/bishopfox/unredacter), _pseudo-pixelation_ is used:
> Only colors from the fringe of the selected area are used to generate a pixelation-like effect. The interior of the selected area is not used as an input at all and hence can not be recovered.
The edges of the pixelated area are used to generate a color palette, and then each pixel is generated by randomly sampling from that palette's gradient.
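A minimal sketch of that idea in Python/NumPy, for a grayscale image. This is not the tool's actual code; it is simplified in that it samples fringe colors directly rather than a gradient between them, and `pseudo_pixelate` and its parameters are hypothetical names. The key property is preserved: interior pixels are never read.

```python
import numpy as np

def pseudo_pixelate(img, x0, y0, x1, y1, block=8, rng=None):
    """Fill the region [y0:y1, x0:x1] with pixelation-style blocks whose
    colors are drawn only from the region's one-pixel fringe. The interior
    of the selection is never used as input, so it cannot be recovered."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    # Collect the fringe: the one-pixel border just outside the selection.
    top = img[y0 - 1, x0 - 1:x1 + 1]
    bottom = img[y1, x0 - 1:x1 + 1]
    left = img[y0:y1, x0 - 1]
    right = img[y0:y1, x1]
    palette = np.concatenate([top, bottom, left, right])
    # Paint each block with a color sampled at random from the fringe palette.
    for y in range(y0, y1, block):
        for x in range(x0, x1, block):
            c = palette[rng.integers(len(palette))]
            out[y:min(y + block, y1), x:min(x + block, x1)] = c
    return out
```

Because the "secret" values inside the rectangle never enter the computation, no amount of depixelation effort can get them back out.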
The main point here stands -- using a fixed, deterministic algorithm on a knowable starting text is not secure. But there are plenty of easy ways to add randomness and make it secure.
I wouldn't consider a mosaic + swirl to be fully secure either, though, especially since both of these operations may preserve the sum of all pixels, which may still leak enough information to dictionary-attack a small number of digits.
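The sum-preservation point is easy to verify: a classic mosaic replaces each block with its mean, so when the blocks tile the region evenly, the total pixel sum is unchanged (this sketch assumes that even tiling; partial edge blocks break exact preservation). An attacker who can render every candidate string can then prune the dictionary by comparing sums.

```python
import numpy as np

def pixelate(img, block):
    """Classic mosaic: replace each block with its mean value.
    Each block's sum is mean * count == original sum, so the total
    pixel sum of the image is preserved."""
    h, w = img.shape
    out = img.astype(float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out
```

So even a heavily mosaicked PIN image can leak its total "ink", and for a small digit count that invariant alone may rule out most candidates.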
But yes, at the end of the day, the best bet is to just take a mosaic of a random text and place it over the text you're trying to obscure. The reason people use mosaic is because it is more aesthetic than a black box, but there is no reason it has to be a mosaic of the actual text.
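That "mosaic of unrelated content" trick can be sketched in a few lines (the function name `decoy_mosaic` and its parameters are made up for illustration): mosaic the corresponding region of a decoy image and paste those blocks over the real region, so the blocks carry zero information about the covered text while still looking like a pixelation.

```python
import numpy as np

def decoy_mosaic(img, decoy, x0, y0, x1, y1, block=8):
    """Cover img's selected region with a mosaic computed from the same
    region of an unrelated decoy image; the real pixels are never read."""
    out = img.astype(float)
    patch = decoy[y0:y1, x0:x1].astype(float)
    # Mosaic the decoy patch, not the secret.
    for y in range(0, y1 - y0, block):
        for x in range(0, x1 - x0, block):
            patch[y:y + block, x:x + block] = patch[y:y + block, x:x + block].mean()
    out[y0:y1, x0:x1] = patch
    return out
```

The result is visually a mosaic, but "unredacting" it can only ever recover the decoy.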
It's a nice mix of optically unobtrusive, algorithmically secure, and pleasant to look at.
Blacking out text still gives attackers an idea of the length of the original, which can be useful information, especially when the original is something like a person's name. You can mitigate that by either erasing the text completely (e.g. replace it with the background color of the paper) or making the bars longer.
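For the bar-length leak, the simplest fix is to draw a constant-width bar regardless of what it covers; a toy sketch (the width of 120 pixels is an arbitrary assumption, not a recommendation):

```python
import numpy as np

def redact_fixed(img, x0, y0, y1, bar_w=120, color=0):
    """Draw a bar of fixed width bar_w starting at x0, so the redaction
    leaks nothing about the length of the underlying text."""
    out = img.copy()
    out[y0:y1, x0:x0 + bar_w] = color
    return out
```

Every name gets the same-sized bar, so "short bar, probably 'Li'; long bar, probably 'Featherstonehaugh'" stops working.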
This feels safe to me, I suppose with machine learning it could still be cracked though. Thoughts on this technique?