4 minute read | 733 words, by Ruben Berenguel. Some links are affiliate links.
Looks like my mojo is coming back.
I have submitted presentation proposals to three big data conferences (I haven’t heard back from any yet, but the CFPs are still open), and I’ll be talking a bit about Haskell and parser combinators at Hybrid Theory’s lunch-and-learn (which we call the “time of technical tales”, since it’s not during lunch). Yesterday I wrote the skeleton of what I want to explain on Friday, and I will probably polish it for general use and publish it in the coming weeks.
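To give a flavour of what the talk covers: the core idea of parser combinators is that a parser is just a function from input to a list of (result, remaining input) pairs, and small parsers compose into bigger ones. This is a minimal sketch in Python (the talk itself uses Haskell; all names here are illustrative, not from the talk):

```python
# A parser is a function: str -> list of (result, remaining_input) pairs.
# An empty list means the parse failed.

def char(c):
    """Parser that matches exactly one expected character."""
    def parse(s):
        return [(c, s[1:])] if s[:1] == c else []
    return parse

def seq(p, q):
    """Run p, then run q on the leftover input; pair up both results."""
    def parse(s):
        return [((a, b), rest2)
                for a, rest1 in p(s)
                for b, rest2 in q(rest1)]
    return parse

def alt(p, q):
    """Try p; if it fails, fall back to q."""
    def parse(s):
        return p(s) or q(s)
    return parse

def many(p):
    """Zero or more greedy repetitions of p, collected into a list."""
    def parse(s):
        acc, rest = [], s
        while True:
            out = p(rest)
            if not out:
                break
            a, rest = out[0]
            acc.append(a)
        return [(acc, rest)]
    return parse
```

With just these four combinators you can already build small grammars, e.g. `many(alt(char('a'), char('b')))` parses any run of a's and b's.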
It was pretty good. If you have been a teacher at a university, some of the performance review “rules and tips” are painfully obvious. I’d give it 4.5 ⭐️; it wins half a star by quoting a Meat Loaf song.
A comparison of value shrinking in Hypothesis (Python) and QuickCheck (Haskell) by the Hypothesis author, from 5 years ago. The claim that QuickCheck’s shrinking is bad and Hypothesis’ is good needs, then, to be taken with a grain of salt (check the comments from the time in r/haskell). For example, I’m surprised by the mention that having shrinkers for base types does not extend to complex datatypes: I would expect this to be derivable.
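To illustrate why I’d expect derivability: a shrinker takes a value and proposes “smaller” candidates, and shrinkers for base types compose mechanically into shrinkers for containers. A sketch in Python, with names of my own invention (this mimics QuickCheck’s `shrink` idea, not the actual internals of either library):

```python
# A shrinker maps a value to a list of strictly "smaller" candidate values.

def shrink_int(n):
    """Shrink an integer towards zero: try 0, half, and one step closer."""
    if n == 0:
        return []
    out = [0]
    if abs(n) > 1:
        out.append(n // 2)
    out.append(n - 1 if n > 0 else n + 1)
    # Drop duplicates and the original value itself.
    return [x for i, x in enumerate(out) if x != n and x not in out[:i]]

def shrink_list(shrink_elem):
    """Derive a shrinker for lists from a shrinker for the elements."""
    def shrink(xs):
        out = []
        # Candidate 1: drop each element in turn (shorter list).
        for i in range(len(xs)):
            out.append(xs[:i] + xs[i + 1:])
        # Candidate 2: shrink each element in place (same length, smaller parts).
        for i, x in enumerate(xs):
            for y in shrink_elem(x):
                out.append(xs[:i] + [y] + xs[i + 1:])
        return out
    return shrink
```

Nesting works the same way: `shrink_list(shrink_list(shrink_int))` is a shrinker for lists of lists of integers, obtained purely by composition, which is the sense in which shrinking a complex datatype “should” follow from shrinking its parts.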
I have only skimmed it, since I’m no longer doing deep-down performance work (and never was too much into it). I kind of knew how to get good performance, and some silly tricks, in the Pentium II/III era (how to avoid some pipelining issues, or how to keep speculative execution happy), but nowadays microprocessors do too many things, as you will see in this article.