I've never used TLC with a large model, but I bet making the tool faster would make it more useful to lots of people.
I wonder what the speedup would be if the code targeted a GPU?
But I wonder if the logic of model checking is actually amenable to vectorization. I suspect not really, even for something basic like checking safety properties, where you could try to shard the state space across cores. There is still likely to be enough synchronization required to eliminate the benefits. A cheaper way to test it would be to try vectorizing on the CPU first.
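For what it's worth, here is roughly what the "shard the state space across cores" version looks like as a sketch (hypothetical State/successors/isSafe types for illustration, not TLC's internals). The shared seen-set and work queue are exactly the synchronization the benefits hinge on: every worker contends on them for every state it expands.

    import java.util.List;
    import java.util.Queue;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch only: explicit-state safety checking with the work split across
    // threads. State, successors() and isSafe() stand in for whatever the spec
    // defines; State implementations need equals/hashCode for dedup to work.
    final class ParallelSafetyCheck {

        interface State {
            List<State> successors();
            boolean isSafe();
        }

        static boolean check(State initial, int workers) throws InterruptedException {
            // The shared structures below are the synchronization in question:
            // every worker hits the seen-set once per generated successor.
            Set<State> seen = ConcurrentHashMap.newKeySet();
            Queue<State> frontier = new ConcurrentLinkedQueue<>();
            AtomicLong pending = new AtomicLong(1);       // states queued but not yet expanded
            AtomicBoolean violation = new AtomicBoolean(false);
            seen.add(initial);
            frontier.add(initial);

            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int i = 0; i < workers; i++) {
                pool.submit(() -> {
                    while (!violation.get() && pending.get() > 0) {
                        State s = frontier.poll();
                        if (s == null) { Thread.onSpinWait(); continue; }
                        if (!s.isSafe()) { violation.set(true); break; }
                        for (State next : s.successors()) {
                            if (seen.add(next)) {         // atomic add doubles as the dedup check
                                pending.incrementAndGet();
                                frontier.add(next);
                            }
                        }
                        pending.decrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            return !violation.get();                      // true = no reachable unsafe state
        }
    }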
For a pure hardware-based speedup, if there is an effort to transcompile TLA+ specs to C++, there could then be a further step to transcompile that to, say, Verilog and try to run the model checking on an FPGA. That _might_ pay off.
So formal methods may be used for extremely critical parts of systems (e.g. safety-critical systems, or embedded systems where later fixes cannot readily be rolled out), but they fail to make inroads in most other development because it's a lot of extra work.
But yeah, that's basically the answer to the conformance problem for these sorts of lightweight formal methods: trace validation or model-based testing.
[0] https://buttondown.com/hillelwayne/archive/requirements-chan...
0. https://conf.tlapl.us/2020/11-Star_Dorminey-Kayfabe_Model_ba...
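To make the trace-validation idea a bit more concrete: you instrument the implementation to append one structured record per transition the spec models, then check offline that the logged behaviour is one the spec allows. A minimal sketch of the logging half, with made-up action/field names (not what the talks above use):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Hypothetical trace logger: one JSON object per line, one line per state
    // transition the spec cares about. A "trace spec" can then be written whose
    // only allowed behaviours are those that appear in this log.
    final class TraceLog {
        private final Path file;

        TraceLog(Path file) { this.file = file; }

        void append(String action, long counter, boolean locked) throws IOException {
            // Keep records tiny and append-only so the instrumentation stays cheap.
            String line = String.format(
                "{\"action\":\"%s\",\"counter\":%d,\"locked\":%b}%n",
                action, counter, locked);
            Files.writeString(file, line, StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    // Usage, at the points where the implementation takes a step the spec models:
    //   log.append("AcquireLock", counter, true);
    //   log.append("Increment", counter + 1, true);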
It is probably better to try matching an implementation to a spec you know works, and debug that, than to try matching an implementation to an "in your head" spec which might not be correct in the first place. Debugging the spec and then the implementation seems to make more sense than debugging both at the same time.
A more vicious issue could be whether your model... models... enough of reality to be relevant, because you can end up with a correct spec and a correct implementation, but in the end the cow isn't round enough. But I don't know... I have too little experience in the domain.
Damn. Happened a 10 minute drive from my house and I didn't even know about it.
TLA+ is on the infinite bucket list for me. I'm sure, like many others, I know the value of learning and applying formal verification, but figuring out how to actually jump in feels impenetrable.
Personally I learned by reading the first few chapters of Specifying Systems.
The PlusCal-to-TLA+ transpiler is considered part of the core TLA+ tools and will definitely keep being maintained.
[0] https://conf.tlapl.us/2020/03-Heba_AlKayed-An_Extension_of_P...
It surely did produce a fancy, mathy PDF which I proudly showed and explained to my team, but honestly, a little rubber-duck talking would have let us find the same bug without TLA+.
For some context, it's a CRUD API + web interface for external clients, nothing too complicated, and I really wanted to try TLA+ in real life.
Truffle [1] can convert an interpreter into a JIT compiler -- there's no need to invent a bytecode; instead of an interpreter you get compilation to native code, it's easy to add intrinsics in Java, and optimisations can be added gradually over time. This would probably be the most effective strategy, even in general, and certainly relative to the cost.
[1]: https://www.graalvm.org/latest/graalvm-as-a-platform/languag...
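For anyone curious what that looks like: you write an ordinary AST interpreter where each node extends Truffle's Node class, and Graal partially evaluates the hot trees into native code. A stripped-down, hypothetical sketch of what expression nodes for a TLC-like evaluator could look like (nothing like this exists in TLC today):

    import com.oracle.truffle.api.frame.VirtualFrame;
    import com.oracle.truffle.api.nodes.Node;
    import com.oracle.truffle.api.nodes.RootNode;

    // Hypothetical expression nodes for a tree-walking evaluator; Graal can
    // partially evaluate trees like this into machine code once they get hot.
    abstract class ExprNode extends Node {
        abstract long execute(VirtualFrame frame);
    }

    final class IntLiteralNode extends ExprNode {
        private final long value;
        IntLiteralNode(long value) { this.value = value; }
        @Override long execute(VirtualFrame frame) { return value; }
    }

    final class AddNode extends ExprNode {
        @Child private ExprNode left;
        @Child private ExprNode right;
        AddNode(ExprNode left, ExprNode right) { this.left = left; this.right = right; }
        @Override long execute(VirtualFrame frame) {
            return left.execute(frame) + right.execute(frame);
        }
    }

    final class ExprRootNode extends RootNode {
        @Child private ExprNode body;
        ExprRootNode(ExprNode body) { super(null); this.body = body; }
        @Override public Object execute(VirtualFrame frame) { return body.execute(frame); }
    }

    // Usage: build 1 + 2 as a tree and let Truffle handle compilation.
    //   long r = (long) new ExprRootNode(
    //       new AddNode(new IntLiteralNode(1), new IntLiteralNode(2)))
    //       .getCallTarget().call();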
Edit: okay the more I look into GraalVM the more impressed I am. I will have to sit down and really go through their docs. Oracle was actually cooking here.
A few years ago I tried making some contributions to the TLC codebase. It was definitely "academic code": lacking tests, reinventing basic data structures instead of using them from libraries, and largely the work of a single contributor with seemingly no code review of commits. I was motivated to try to help improve things and wanted to get a sense of what that would be like, so I sent a small PR to get a feel for working with the code owners. They basically stonewalled me. It was odd.
Modeling languages are useful for checking the correctness of an algorithm during development. During development, a model can also serve as a specification for the actual implementation. This requires that the modeling language be readable to a broad range of developers, which TLA+ is not. We have been experimenting with FizzBee (fizzbee.io), which looks promising in this regard.
When you go to prod, you really want to test your actual implementation, not a model of it. For this you want something like https://github.com/awslabs/shuttle (for Rust), or https://github.com/cmu-pasta/fray (for Java). Or use something custom.
This is Quint [1], a different syntax for TLA+ with some extra tooling (type checker, CLI, evaluator, REPL, VSCode extension, testing framework, etc.) which can be transpiled to TLA+ (a very direct translation, as the semantics is the same [2]), and can therefore make use of the TLA+ tools as well (mainly the model checkers).
I think this is far from the same level of "unreadableness" as TLA+, and it makes formal methods much more approachable. It would be great if you could take a look and tell me whether you agree.
[1]: https://quint-lang.org/ [2]: https://quint-lang.org/docs/lang
The most frustrating part is that it's hard to use with a TLA+ background. I know how to do something in TLA+ but have no clue how to do it in Quint, because the translation rules aren't direct and obvious.
On the other hand, it's way better than PlusCal!
But I'm heavily biased. Please take this "critique" as a mumble from a TLA+-initiated duckling.
Most of the documentation doesn't mention TLA+ anymore, as it is focused on new programmers coming to formal methods with no prior experience with it. I agree it was very confusing before!
Other than that, if I can argue against some of your points:
1. Quint brings more than just type checking (better IDE diagnostics, better REPL, better CLI, and runs that work like tests) [2].
2. The translation rules between Quint and TLA+ are actually very straightforward [3], especially if we stay in the common idiom of TLA+.
3. This is more subjective, but a few people using Quint have reported really liking isolating the state machine into the "action" mode and then defining the protocol in "pure def"s. I understand what you mean by "leaky", and a part of me agrees with that, but in practice we are seeing real benefits on this side.
Again, thank you for your points - there's some stuff in your feedback I haven't heard before, and it's great to have a new perspective.
[1]: https://quint-lang.org/docs/language-basics
[2]: https://quint-lang.org/docs/faq#how-does-quint-compare-to-tl...
Do you cover all that TLA+ can do?
TLA+ syntax is simply a pain to type on my keyboard, and it has been a hurdle for the longest time...
    \A x \in set: x.id /= 1 /\ ~x.active

vs

    all x in set: x.id != 1 && !x.active
The latter has the same semantics but is much easier to read for the average developer, and more importantly is easier to type without syntax errors.

It has some chapters that give very precise definitions of how things work in TLA+ (both at the language level and for model checking), which answered some questions I often had along the way in more informal introductions.