In order to be able to design equipment, the instrumentation generally needs to outperform the equipment, sometimes by a significant margin. If I'm looking at the eye of a digital signal, I need to sample much faster than the signal itself.
It'd be fun to have a book of tricks from this era. At some point, it will fade into obscurity. Right now, the state of the art uses a whole different bag of tricks, and those feel more textbook and less clever.
On the other hand, what's nice is that in 2025, decent equipment is cheap. There's a breakpoint around 100 MHz: below it you can't do basic work, and above it you can. That's roughly where FM pickup and a lot of oscillations sit. That capability used to cost a lot, but as technology progressed, we're at a point where a decent home lab can be had for well under a grand.
I think you'll get a kick out of:
"Analog Circuit Design: Art, Science and Personalities"
https://www.amazon.com/Analog-Circuit-Design-Personalities-E...
> In order to be able to design equipment, the instrumentation generally needs to outperform the equipment, sometimes by a significant margin.
Flashback to my days as a beginning TLP (transmission line pulse) engineer. I was subjecting ESD protection structures to kV pulses with ~nanosecond rise times. The oscilloscope measures the pulse as it enters and reflects. You'd increase the voltage until the device breaks, then do a wafer mapping. I remember a conversation where I showed the setup to a colleague from a different department, telling him we were developing next-gen protection against static discharges. To which he replied: why don't we use what the oscilloscope guys are using?
Though it isn't a book, the Hewlett-Packard Journal is a gold mine for this type of content: https://web.archive.org/web/20120526151653/http://www.hpl.hp...
E.g. An 8-Gigasample-per-Second, 8-Bit Data Acquisition System for a Sampling Digital Oscilloscope (October 1993): https://web.archive.org/web/20120526151653/http://www.hpl.hp...
I paid the full price for it (>$2000), but almost all Rigol scopes come in lower- and higher-bandwidth versions with the same chassis and electronics. You can buy the lower-bandwidth version for a fraction of the price of the higher one and apply a software patch to unlock the higher bandwidth.
This has been possible for years. Despite many SW revisions, Rigol has never made an effort to block this. I think they know that in the grand scheme of things, they make more money this way.
So research 100 MHz Rigol scopes and check if they are hackable. Chances are high that they are.
It's all about extracting as much as they can from each customer, but I'm happy they're willing to let "willing to do hacking stuff" be a market segmenter for them.
Unfortunately, I think the GP's question about a device a hobbyist can't outgrow has a negative answer. I really like my DHO924S, and there are a huge number of tasks where it is way more than enough (and it's very portable, too). But all around me are computer, video, and radio devices that run much faster than 250 MHz, and going to 350 or 500 MHz doesn't really change that fact. Scope prices go pretty much exponential after a couple hundred MHz... so if you want to snoop on an SFP+'s SFI signal, USB3, or some HDMI link via a scope, anything but a lucky surplus find is unlikely to fit a hobby budget.
Yet I think it's totally reasonable for a hobbyist to want to work on the high speed digital signals that surround them in their own home.
(The hobby solution to fast digital buses, I guess, is to make custom boards with inexpensive FPGAs rather than using a 5+-figure oscilloscope.)
Part of the reason is that the specifications for these fast interfaces are created for robustness. They can sustain a lot of PCB design abuse and still work fine in practice even when out of spec.
I mean, I think this would be a very nice project for someone with hardware skills and some time on their hands, and it would be useful too.
I've also looked into CCD memory, but it doesn't seem to be a thing anymore; I couldn't find any modern chips of that kind.
I think they can be used with analog signals too. After all, they are (I suppose) just a chain of transistors holding charge, like in a CCD. That's simply the most efficient implementation, since doing it with logic gates would add more overhead. EDIT: maybe not; it seems at least some of these chips have digital input/output stages. Maybe it could work if you put a very fast 8-bit ADC in front of the delay line and used 8 delay lines, one for each bit? :)
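To make the bit-sliced idea concrete, here's a tiny numpy sketch of the bookkeeping, with all sizes invented; each ADC bit gets its own 1-bit delay line, all clocked in parallel:

    import numpy as np

    BITS, DEPTH = 8, 512
    rng = np.random.default_rng(3)
    codes = rng.integers(0, 256, DEPTH, dtype=np.uint8)   # fast 8-bit ADC output

    # One 1-bit delay line per ADC bit: clock all 8 in at the fast rate,
    # then clock them out at whatever rate the backend can handle.
    lines = [(codes >> b) & 1 for b in range(BITS)]

    # Recombine the bit planes on the slow side.
    out = sum(line.astype(np.uint16) << b for b in range(BITS)).astype(np.uint8)
    assert np.array_equal(out, codes)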
Anyway, I have totally zero experience with these chips.
But I can imagine you clock in the signal using a fast clock (maybe the internal clock), and then clock out the signal using a slow clock (slow enough for a subsequent ADC chip).
Also, perhaps you can put a bunch of these combinations in parallel to increase bandwidth, or to increase sample depth.
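A minimal sketch of that fast-in/slow-out scheme (the rates, depth, and ADC width are all made up for illustration):

    import numpy as np

    FAST_RATE = 2e9    # hypothetical capture rate into the delay line (2 GS/s)
    SLOW_RATE = 20e6   # hypothetical readout rate a cheap ADC can keep up with
    DEPTH = 1024       # number of charge buckets in the delay line

    # "Clock in" fast: the line ends up holding DEPTH analog samples.
    t = np.arange(DEPTH) / FAST_RATE
    ccd = np.sin(2 * np.pi * 100e6 * t)   # 100 MHz test tone

    # "Clock out" slow: each bucket is digitized by a modest 8-bit ADC.
    # Readout takes far longer than capture, but the samples still
    # represent the fast capture window.
    adc_codes = np.round((ccd + 1.0) / 2.0 * 255).astype(np.uint8)

    print("capture window: %.0f ns" % (DEPTH / FAST_RATE * 1e9))
    print("readout time:   %.1f us" % (DEPTH / SLOW_RATE * 1e6))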
Not quite the same, but similarly novel.
Indeed. Even if you have an instrument, the skill to operate it, and a meaningful result that reveals some problem, that still doesn't spell out a solution. It may help, but high-frequency behavior is subtle.
If you're designing a board, not being able to look at its signals is a major limitation. Is something wrong with the transmitter, receiver, cable, connector, PCB, firmware, or driver? Who knows! It doesn't work, and that's all you're going to get. Have fun randomly tweaking stuff in the hope that it magically starts working.
There seems to be no way to debug this stuff unless you are a big company that can pay large sums of money.
There can even be manufacturing defects in the FR4 weave that mess up signal integrity, which you might want to check for as a QA step on the assembly line. For high volume, that gets slow or expensive.
Yes, but: You don’t need a general purpose realtime high bandwidth scope for verifying signal integrity.
You can use an equivalent-time sampling scope (~10 ksps to 100 ksps) to measure the eye diagram and ensure that the received eye matches the mask specified by the spec.
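The folding trick itself fits in a few lines. A toy numpy version (bit rate, noise level, and mask are all invented) to show the essence of an eye/mask test:

    import numpy as np

    UI = 1e-9                                    # unit interval of a 1 Gb/s link
    rng = np.random.default_rng(0)

    # Equivalent-time sampling: sparse samples spread over many bit periods.
    t = np.sort(rng.uniform(0, 1000 * UI, 10_000))
    bits = rng.integers(0, 2, 1001)
    v = bits[(t / UI).astype(int)] * 2.0 - 1.0   # ideal NRZ levels
    v += rng.normal(0, 0.05, t.size)             # receiver noise

    # Fold every sample onto one unit interval: that's the eye diagram.
    phase = (t % UI) / UI

    # Toy mask: no sample may come near 0 V in the center of the eye.
    center = (phase > 0.4) & (phase < 0.6)
    print("mask violations:", np.sum(np.abs(v[center]) < 0.5))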
The HP logic analyzers back then had a really neat touchscreen interface based on criss-crossing infrared beams in front of the CRT face. The only thing that I've ever used that felt even better than a capacitive touchscreen, though obviously lower resolution.
As for the HP touch screen, I tore down a bunch of HP 16500A logic analyzers and reverse engineered the touch screen PCB. It uses a pretty simple LED/photosensor matrix. You can see the PCB in one of the pictures here: https://tomverbeure.github.io/2022/06/17/HP16500a-teardown.h....
I remember pulling a 486 out of its socket in the 1990s and putting it back with the wrong orientation. There was a pop and a bit of smoke. Something on the mainboard had burnt, and it wasn't working anymore.
I used smell to locate the fault, a big trace on the PCB, which I soldered back together and, magic, it all worked again...
Uh oh. We needed that board. What to do? Well, it can't hurt to try. We had "freeze spray" around for debugging purposes, so we got a bottle of white-out handy (what's that?), frosted up the board really well on the component side, powered it up, and quickly marked the devices that defrosted notably quicker than the rest.
Got the solder station lady to replace all those parts and it worked again.
Old days...
I have an entry-level standalone oscilloscope that I got but never used. I once looked for tutorials and unpacked it, ready to test, but:
It's covered in that kind of plastic that goes all gooey if left unattended for a long time.
Any hints on how I can clean it up so I can touch it again?
The biggest use case for this is sensor interfaces where the signal is still analog (not passed through an ADC yet). Voice recognition is a typical example where analog neural networks are used with a certain level of success. People are now pushing for image recognition, but the architecture of a digital camera isn't compatible with that, so I don't see much happening there.
Funny fact: these kinds of circuits have been used heavily in the analog portions of chips since the early 90s to implement rather complex calibration/training loops (correlation, LMS optimization, pattern recognition, etc.). There's a lot of analog computing happening in every SoC.
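For flavor, a one-coefficient LMS loop like the ones buried in those calibration paths; this sketch adapts a gain-error correction toward the true error (all numbers invented):

    import numpy as np

    rng = np.random.default_rng(5)
    gain_error = 0.05    # the unknown analog gain error to be calibrated out
    w = 0.0              # correction coefficient the loop adapts
    mu = 1e-3            # LMS step size

    for _ in range(20_000):
        x = rng.normal()                   # reference signal
        err = x * (1 + gain_error) - x * (1 + w)
        w += mu * err * x                  # LMS update drives w toward gain_error

    print(w)   # converges to ~0.05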
Thinking about it, I might still have the device somewhere in the attic.
I recently got a TDS684A and also made the same discovery, and wrote half a blog post about it which remains unfinished/unpublished. I don't have much of an EE background (at least, not on this level) so my article was certainly worse. It's also my only decent scope, so I don't have a good way to take measurements of the scope itself.
Relatedly, I dumped the firmware of mine (referencing Tom's notes on a similar model) and started writing an emulator for it: https://github.com/DavidBuchanan314/TDS684A
It boots as far as the VxWorks system prompt and can start booting the Smalltalk "userspace", but it crashes at some point during hardware initialization (since my emulation is very incomplete!). Some screenshots/logs: https://bsky.app/profile/retr0.id/post/3ljzzkwiy622d
Edit: heh, I just realised I'm cited in the article, regarding the ADG286D
Because that's more than enough for scanning out a screen-width's worth of samples from the analog CCD snapshot.
In a digital camera, the CCD columns capture the image basically instantaneously. Effectively an infinite sample rate!
Then the data is shifted out of the CCD at some rate that basically doesn't matter, as long as it isn't so slow that it takes seconds.
formerly_proven•2mo ago
Is this maybe using some form of correlated double sampling?
> It looks like the signal doesn’t scan out of the CCD memory in the order it was received, hence the signal discontinuity in the middle.
Or maybe the samples are also interleaved in the low-order bits in some way. This could be because the organization of the CCD isn't symmetric for the input and output paths, perhaps to reduce area or power, since only one path has to be fast. That would make sense: if you implement the CCD as n parallel bucket brigades, you only have to put a fast S&H and a multiplexer in front of it; then you can drive the brigades at a fraction of the actual sample rate, and the capacitive load of each clock phase is much lower as well.
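A tiny sketch of that bookkeeping (n = 4 chosen arbitrarily), which also shows why the readout order would look scrambled relative to the capture order:

    import numpy as np

    N = 4                        # parallel bucket brigades (arbitrary)
    samples = np.arange(16)      # capture order from the fast S&H

    # The fast front-end deals samples round-robin into N slow brigades,
    # so each brigade only needs to run at 1/N of the sample rate.
    brigades = [samples[i::N] for i in range(N)]

    # If readout then drains one brigade after another, the output order
    # is interleaved relative to the capture order:
    print(np.concatenate(brigades))
    # [ 0  4  8 12  1  5  9 13  2  6 10 14  3  7 11 15]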
dp-hackernews•2mo ago
Oversampling Versus Upsampling: Differences Explained https://www.soundstagenetwork.com/gettingtechnical/gettingte...
tverbeure•2mo ago
Some people have also suggested deliberate addition of a pseudo-random signal that gets removed after sampling to counteract some CCD issues. But I don't know how that would work either.
photon_rancher•2mo ago
For example: you add a dithering signal which can be processed out. If the dither has the right properties (for example, random, evenly distributed noise bounded to one LSB), you can then average multiple samples to get more effective resolution than the ADC has. The gain is slow, roughly one extra bit per 4x more samples, and if you don't take enough samples you mainly just average down the noise without gaining resolution. It also requires a periodic input.
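A quick numpy illustration of that first trick, assuming an idealized 4-bit quantizer and 1-LSB uniform dither (parameters invented):

    import numpy as np

    rng = np.random.default_rng(1)
    LSB = 1 / 2**4                                    # coarse 4-bit ADC
    t = np.linspace(0, 1, 200, endpoint=False)
    signal = 0.3 * np.sin(2 * np.pi * 3 * t) + 0.5    # periodic input

    def acquire(dither):
        x = signal + (rng.uniform(-LSB/2, LSB/2, signal.size) if dither else 0.0)
        return np.round(x / LSB) * LSB                # quantize

    # Average 4096 acquisitions of the same periodic signal.
    plain = np.mean([acquire(False) for _ in range(4096)], axis=0)
    dith = np.mean([acquire(True) for _ in range(4096)], axis=0)

    print("rms error, no dither:", np.sqrt(np.mean((plain - signal) ** 2)))
    print("rms error, dithered: ", np.sqrt(np.mean((dith - signal) ** 2)))

Without dither, every acquisition quantizes identically, so averaging gains nothing; with dither, the quantization error averages away.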
However, you can also pull similar tricks in the time domain, or use simultaneous sampling with multiple ADCs. You can also interleave slower ADCs with a phase shift. This produces stitching artifacts unless you average them out, though, because ADCs are generally not well matched at the limits. You can bin or calibrate this out somewhat if you can characterize the error.
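A sketch of the interleaving case with the crudest possible fix, offset-only calibration (real instruments also characterize gain and timing skew per lane):

    import numpy as np

    t = np.arange(4096) / 1e9                  # a pretend 1 GS/s record
    sig = np.sin(2 * np.pi * 7.3e6 * t)

    # Two interleaved ADC lanes with mismatched DC offsets: the mismatch
    # shows up as a stitching spur at fs/2 in the spectrum.
    codes = sig.copy()
    for lane, offset in enumerate([0.00, 0.03]):
        codes[lane::2] += offset

    # Calibration: pull both lanes back to a common mean.
    target = codes.mean()
    for lane in range(2):
        codes[lane::2] -= codes[lane::2].mean() - target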
You can do a similar thing in the frequency domain if the ADC sample window is narrow enough, but it has arguably the worst artifacts. Low-pass the first ADC at N/2, band-pass the second from N/2 up to N, the third from N up to 3N/2, etc. But the Fourier transform will have a bunch of junk at the stitching points.
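To make the stitching-point junk concrete, here's a toy where the band split is done digitally after capture (a real version would use analog filters in front of each ADC); the gain mismatch between the two paths leaves a step at the crossover bin:

    import numpy as np

    n = 4096
    sig = np.random.default_rng(4).normal(size=n)   # broadband test signal

    # Two capture paths with slightly mismatched gain.
    a1 = np.fft.rfft(sig * 1.00)
    a2 = np.fft.rfft(sig * 0.97)

    # Stitch the spectrum: lower half from path 1, upper half from path 2.
    k = len(a1) // 2
    stitched = np.concatenate([a1[:k], a2[k:]])
    rebuilt = np.fft.irfft(stitched, n)

    # The 3% amplitude step at bin k is the "junk at the stitching point".
    print("reconstruction error rms:", np.sqrt(np.mean((rebuilt - sig) ** 2)))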
Or you can take the sampling scope approach using a fast but low sample rate ADC and many triggers.
I’ve seen most of these done on commercial instruments if you dig into the settings. Some of them you can see in normal operation (like the stitching in the frequency domain).
But I think the other poster was suggesting the first case applies: if you think about it, there are certain periodic signals you can add instead of a random one. That has the advantage of limiting the SNR degradation, and it can also be filtered out or detected in the data more easily.
tverbeure•2mo ago
It's a repeating pattern of ~195 samples, so it's easy to figure out the pattern during calibration and subtract it from the values measured by the ADC.
In addition to that, there is also some interleaving going on.
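Removing a fixed pattern like that is straightforward once the period is known. A sketch (the ~195-sample period is from the post; the rest is invented):

    import numpy as np

    PERIOD = 195

    def estimate_pattern(calib_records):
        # Average many captures of a known input (e.g. grounded), then fold
        # the result onto one period to estimate the fixed pattern. Assumes
        # each capture starts at the same phase of the pattern.
        avg = np.mean(calib_records, axis=0)
        n = (avg.size // PERIOD) * PERIOD
        return avg[:n].reshape(-1, PERIOD).mean(axis=0)

    def correct(record, pattern):
        # Subtract the pattern, tiled across the whole record.
        reps = -(-record.size // PERIOD)      # ceil division
        return record - np.tile(pattern, reps)[:record.size]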
crote•2mo ago
I wouldn't be surprised if the CCD has all sorts of funky analog stuff going on internally which has different impacts on different samples, which would be incredibly hard to deal with on its own.
However, if this behaviour is merely a fixed offset, it would be fairly easy to compensate for this on the digital side: just do a calibration with a known signal, and the measured offset can be used to reverse its effect in future sampling.
tverbeure•2mo ago
Another possibility is that there's some charge decay which you could calibrate for.
tverbeure•2mo ago
How is the Smalltalk program stored? As some kind of bytecode, or as the original source code?