Oh, no wonder this is so comprehensive and fearless. It's Andrew Zonenberg.
After MCHP bought them and opened up (what I thought was) the full datasheet I gave them a second chance. Seems they still held some back.
I wonder if there are certain elements in certain "industrial complexes" that need to maintain or interface with legacy TR systems and that's why it's still hanging around in "dark silicon".
Audio is usually soft realtime; sometimes, e.g. when doing studio recordings, it's firm realtime.
Some audio setups are run quite "close to the metal", partly because that needs less buffering, but also because the human threshold for noticing latency seems to be around 10ms. And on top of that, audio from multiple sources/sinks must not drift out of phase.
While not technically TR, it does use a token that moves from device to device.
It would be interesting to know if TR is better at contention management than broadcast ethernet - which nobody does anymore because everyone uses switches.
> While not technically TR, it does use a token that moves from device to device.
I assume you typo'd DOCSIS there, but no, DOCSIS does not use a token; it uses separate channels for down- & uplink, and the uplink channels are TDMA and/or CDMA depending on DOCSIS version.
It was eye-opening.
- you can charge money for things
- anything that's not built with the "official compiler" is not "supported"
I interviewed for a junior embedded software engineer position when I was in university, and when I started mentioning I had experience building cross-compilers I was immediately stopped by the interviewer (he literally didn't even let me finish the sentence) and told: "Absolutely not. We don't want to maintain our own toolchain; we want everything coming from the BSP [board support package] and integrated nicely with the vendor's IDE." They used ARM chips, so it wasn't even anything exotic...
The real issue would come if they did not provide the source code for the gcc build they sell you, though.
Relatedly, compiler bugs aren't uncommon in the arm-none-eabi family; the Cortex-M0 in particular seems to have a few that recur every few years.
This is critical if you want any support from the vendor.
If you come to them with a bug in their hardware but you’re not using their toolchain and BSP, it’s the end of the road. You have to recreate a minimal reproduction of the bug in their ecosystem before they’ll look at it.
When you’re working at company scale, paying $1000 for a compiler is a trivial expense.
From my perspective, it's much better to reproduce a bug with a 20-line C or assembler file that compiles with upstream gcc, completely ruling all of their custom stuff out as the root cause.
Just tell me what the silicon does when I poke this register and I'll work around it.
Mhm. I'd say, you're forced to reproduce it on their toolchain and BSP, which may or may not be the end of the road depending on how complex the problem and your use case are.
(I'm aware there's a certain mindset that likes to lean on vendor support for basically everything. If you're in a position where you actually get good support, that might work. But in most instances where I've seen that mentality, it produced expensive results that still didn't actually work, and sometimes, even when AFAICT the vendor is pretty switched on, they just don't have all the context.)
If it's actually gcc, a copy of the GPL should have come with the software. A bunch of other compilers mimic a lot of its interface for compatibility’s sake.
I liked working with Microchip uC, but this was back when the whole IC (PIC24) was described in a single ~1000 page document. I found it very readable and instructive in general.
If I had to pick something today it would be with RP2040/2350. The docs look awesome and there's a huge community that is not locked down in some corporate moderated forum but spread organically, with actually useful GitHub projects. It is the only embedded product where it felt like the open source community is along for the ride and not just picking up the scraps. I hope they continue this line of products.
The PIC24 was actually my first large project. I learned an awful lot from reading its docs, for example how to set up the DMA to read 32 samples from the ADC and let the CPU know when done. Putting it together felt like playing with LEGO blocks. There were many annoyances with the toolchain and the clumsy memory addressing, but I enjoyed it overall.
The NXP was downright unpleasant by comparison. I don't think a junior could be handed an NXP dev board and all the docs to hone their craft: it requires significant patience and expertise to pick out the relevant details in the vastness of their documentation. Of course the NXP product line is huge and I can only comment on the few uC models I had contact with. The sensors and other less complex ICs were vastly better, and their docs were quite digestible.
(But presumably that agreement also restricts Google from redistributing the binaries anyway.)
If anyone can suggest others I would be grateful.
I've heard good things about Nordic, though. Might try them out at some point.
Microchip's own IDE and project generator spit out a hello world project that didn't even compile. NXP wouldn't even let me download their tooling even after their obfuscated sign up flow.
I'd love to hear stories of what it's like to work with chips from these companies.
If you're working at one of the big companies (e.g. Microsoft), they'll give you access to the documentation and source code that should be open for everyone, but even then you're going to spend time reverse engineering your own documentation because trying to get details from them is a months long process of no one being willing to say yes. It's painful. Best to stay away unless you have no other alternatives.
For larger volumes, ~100,000, you get to talk to a distributor yourself and design your own PCB. You still won't get to talk to anyone at BigSemiCo, but you will get access to datasheets and (probably) drivers. You will have to sign an NDA.
For their largest customers, they go all out. The customer gets significant input into the design roadmap years in advance. They can get cost-reduced versions of existing parts that leave off blocks they aren't using. Reference board designs and example software are provided (to the extent that low-margin, enormous-volume customers sometimes just change the logo in the HTML and ship). If the product needs integration work, field engineers will be flown out to assist.
There are some levels between these last two, where they will talk to you but not invest as much.
Take the above numbers with a pinch of salt; it's been a while since I was in that industry.
When I first started the project in 2012-13, Vitesse was just as NDA-happy and I ruled them out. The original roadmap called for a 24-port switch with 24 individual TI DP83867 SGMII PHYs on three 8-port line cards.
I poked at a vsc73xx-based switch in the past and wrote my own test firmware, but had problems with packet loss, I guess because I didn't do all the necessary PHY initializations. In case this might be of interest: https://github.com/ranma/openvsc73xx/blob/master/example/pay...
Also, on the device I had, the EEPROM was tiny and the code is loaded from EEPROM into RAM, so you were pretty much stuck with 8051 assembly that had to fit into the 8KiB of on-chip RAM :)
A while back I tried out Espressif's esp32 and I was impressed by what they were offering. Their devices seem to be well documented and the esp-idf framework is really pleasant to use. It's much easier to work with than STM32Cube and ST's sprawling documentation.
Personally I've standardized on just three STM32 parts:
* L031 for throwaway-cheap stuff where I'm never going to do field firmware updates (so no need to burn flash on a proper bootloader) and just need to toggle some GPIOs or something
* L431 for most "small" stuff; I use these heavily on my large/complex designs as PMICs to control power rail and reset sequencing. They come in packages ranging from QFN-32 to 100-ball 0.5mm BGA which gives a nice range of IO densities.
* H735 for the main processor in a complex design (the kinds of thing most people would throw embedded Linux at). I frequently pair these with an FPGA to do the heavy datapath lifting while the H735 runs the control plane of the system.
This is the approach I took at my last job: we standardized on a small handful of CPUs selected for a certain level of complexity. Before this, choosing a CPU was an agonizing task that took days and didn't add a lot of value. The only time it actually mattered was the one time we got an order of several hundred thousand units. In that case, you want to get the BOM cost as low as you can.
Trying to get the same thing implemented at my current job. I'm seeing the same behavior where a team takes forever to choose a processor, and a "good enough" choice would have taken a couple of hours.
I do remember trying different browsers and even different machines, to no avail. Quickly gave up.
ST has good documentation most of the time, but for a while some of their higher end MCUs had a lot of weird bugs and errata that were simply not documented. I haven’t used any of their modern parts recently but I’ve heard the situation has started improving. I have some friends who were ready to abandon ST altogether after losing so much time on a design to undocumented bugs and parts not behaving as documented.
I haven't been bit by an undocumented silicon bug, but I step on documented STM32H7 bugs on a pretty regular basis and there are some poor design decisions around the OCTOSPI (in addition to bugs) that make me avoid it in almost every situation.
But at least they document (mostly correctly) the registers to talk to their crypto accelerator unlike the Renesas and NXP parts I looked at as potential replacements, both of which needed an NDA to get any info about the registers (although they did supply obfuscated or blob driver layers IIRC).
I also really like ST. At a previous job our go-to processors were Nordic for wearables or anything that needed BLE, and STM32 for pretty much everything else. Wasn't unusual to have an STM32 for all the peripheral I/O and an nRF52 hanging off an I2C port just to talk to an app.
Nordic is OK. Starting up a new project is nowhere near as easy as with STM32CubeMX, and they do tend to update their SDKs frequently, which can be annoying if you have to support legacy projects, but we used them for years with no problems.
I've yet to see anyone here talk about silicon labs microcontrollers. Why's that?
How do you upsell a hardware engineer who just wants to buy a specific chip, and already has everything to evaluate and use it? You don't. So you force everyone to go through sales, and then sales wants to talk to non-engineering higher-ups, and then the upsell happens - while the people who actually knew what they wanted remain as far away as possible.
And if you don't have the pockets deep enough for the sales dept to acknowledge your existence, then you might as well not exist.
If you even promise to buy a few hundred a year through a business, it puts you in a different category and everything gets much easier, but you usually have to go via a distributor (Avnet, Future, Arrow etc.). But if you’re big enough (the hundreds of thousands + qtys) these companies will actually send dedicated support engineers to work with you and help you integrate their parts into your product.
Dealing with small clients is not a priority for most part vendors. Many of them won’t even sell you chips at all until you can qualify yourself as a big customer or, in some cases, buy a license to start designing with their parts for six figures or more.
Unfortunately for the small players, it’s not a priority for most companies to support small customers who might only buy a couple thousand parts or less.
If you want to sell or limit support, why not do that without the documentation complications?
Yes, absolutely. In fact, smaller customers are notoriously more needy. The bigger the customer, the more competent their engineers tend to be (or the more time they have to spend figuring out how to use your stuff). Smaller customers try to offload support onto vendors, which pushes the burden onto internal vendor teams (who don't want to provide the support...).
Maybe. The other thing is that public documentation gets a lot more scrutiny than internal documentation. The reader doesn't have anyone to ask, so typos and mistakes need to be corrected rather than just papered over by a helpful applications engineer.
The ATF15xx have BSDL files released, but that's only for testing/bypass.
- a family guide describing all features of a microcontroller family, usually >500 pages long
- a concrete microcontroller guide describing the specifics of a single microcontroller, usually >50 pages long
- errata guide describing all(?) known silicon bugs with their workarounds
Also, Clang has a backend for MSP430 by default: `clang -print-targets`
As the author demonstrated, the network IC world is very unapproachable.
This was an impressive amount of research to get what he wanted out of the device!