Speeding up atan2f
jacobolus 2021-08-17 17:01:12 +0000 UTC [ - ]
tanpi_2 = function tanpi_2(x) {
  var y = (1 - x*x);
  return x * (((-0.000221184 * y + 0.0024971104) * y - 0.02301937096) * y
              + 0.3182994604 + 1.2732402998 / y);
}
(valid for -1 <= x <= 1) https://observablehq.com/@jrus/fasttan with error: https://www.desmos.com/calculator/hmncdd6fuj
But even better is to avoid trigonometry and angle measures as much as possible. Almost everything can be done better (faster, with fewer numerical problems) with vector methods; if you want a 1-float representation of an angle, use the stereographic projection:
stereo = (x, y) => y/(x + Math.hypot(x, y));
stereo_to_xy = (s) => {
  var q = 1/(1 + s*s);
  return !q ? [-1, 0] : [(1 - s*s)*q, 2*s*q]; }
leowbattle 2021-08-17 17:47:37 +0000 UTC [ - ]
I was just learning about stereographic projection earlier. Isn't it odd how, once you know about something, you notice it appearing everywhere?
Can you give an example of an operation that could be performed better using stereographic projection rather than angles?
jacobolus 2021-08-17 17:54:12 +0000 UTC [ - ]
The places where you might want to convert to a 1-number representation are when you have a lot of numbers you want to store or transmit somewhere. Using the stereographic projection (half-angle tangent) instead of angle measure works better with the floating point number format and uses only rational arithmetic instead of transcendental functions.
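For example, composing two rotations stored this way needs only the rational tangent addition formula. A minimal sketch in C (stereo_add is a made-up name, not from the notebook above):

/* Compose rotations represented as half-angle tangents s = tan(theta/2):
   tan((a+b)/2) = (tan(a/2) + tan(b/2)) / (1 - tan(a/2)*tan(b/2)).
   Overflows to +/-inf when the angles sum to pi, which is exactly the
   stereographic point at infinity handled by stereo_to_xy above. */
static double stereo_add(double s1, double s2)
{
    return (s1 + s2) / (1.0 - s1 * s2);
}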
aj7 2021-08-17 17:02:53 +0000 UTC [ - ]
Now about the phase shift. In 1988, atan2 didn’t exist. Anywhere. Not in FORTRAN, Basic, Excel, or a C library. I’m sure phase shift calculators implemented it, each working alone. For us, it was critical. You see, we actually cared not about the phase shift itself, but about its second derivative, the group delay dispersion. This was the beginning of the femtosecond laser era, and people needed to check whether these broadband laser pulses would be inadvertently stretched by reflection off, or transmission through, the mirror coating. So atan2, the QUADRANT-PRESERVING arc tangent, is required for a continuous, differentiable phase function. An Excel function macro did this, with IF statements correcting the quadrant. And the irony of all this?
I CALLED it atan2.
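For readers who haven't done it by hand: a sketch of the quadrant fix-up such a macro performs, assuming a plain atan is available (illustrative C, not the original Excel code):

#include <math.h>

/* Quadrant-preserving arctangent built from plain atan() plus IF
   statements. Returns the angle of the point (x, y) in (-pi, pi]. */
double my_atan2(double y, double x)
{
    if (x > 0) return atan(y / x);                           /* quadrants I, IV */
    if (x < 0) return atan(y / x) + (y >= 0 ? M_PI : -M_PI); /* quadrants II, III */
    /* x == 0: the ratio is undefined, so pick the axis directly */
    return y > 0 ? M_PI / 2 : (y < 0 ? -M_PI / 2 : 0.0);
}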
dzdt 2021-08-17 19:26:27 +0000 UTC [ - ]
pklausler 2021-08-17 19:48:30 +0000 UTC [ - ]
floxy 2021-08-17 18:03:00 +0000 UTC [ - ]
http://www.bitsavers.org/www.computer.museum.uq.edu.au/pdf/D...
raphlinus 2021-08-17 19:28:39 +0000 UTC [ - ]
floxy 2021-08-17 19:50:23 +0000 UTC [ - ]
https://web.archive.org/web/20161223125339/http://flash-gord...
...but I wonder when it first hit the scene.
kimixa 2021-08-17 20:57:17 +0000 UTC [ - ]
3BSD was released at the end of 1979 according to Wikipedia, and that's just the oldest BSD source I could find, so it may have been in even older versions.
Before ANSI/ISO C I don't think there was really a "math.h" to look for, as it could differ from system to system; but since so many things derive from BSD, I wouldn't be surprised if it was "de facto" standardised to whatever BSD supported.
Narishma 2021-08-17 21:09:46 +0000 UTC [ - ]
Excel 4 was released in 1992. In 1988 the only version of Excel that would have been available on the Mac would be the very first one.
drej 2021-08-17 13:27:32 +0000 UTC [ - ]
ThePadawan 2021-08-17 13:37:01 +0000 UTC [ - ]
If abs(x) < 0.1, "sin(x)" is approximated really well by "x".
That's it. For small x, just return x.
(Obviously, there is some error involved, but for the speedup gained, it's a very good compromise)
tzs 2021-08-17 14:36:09 +0000 UTC [ - ]
For x = 0.1, the error bound on keeping the first N terms of the Taylor series [1] is:

N  Error Limit
1  0.167%          (1/6%)
2  8.35 x 10^-5%   (1/11980%)
3  1.99 x 10^-8%   (1/50316042%)
4  2.76 x 10^-12%  (1/362275502328%)
Even for |x| as large as 1 the sin(x) = x approximation is within 20%, which is not too bad when you are just doing a rough ballpark calculation, and a two-term approximation gets you under 1%. Three terms is under 0.024% (1/43%).

For |x| up to π/2 (90°) the one-term approximation falls apart. The guarantee from the Taylor series is within 65% (in reality it does better, but is still off by 36%). Two terms is guaranteed to be within 8%, three within 0.5%, and four within 0.02%.
Here's a quick and dirty little Python program to compute a table of error bounds for a given x [3].
[1] x - x^3/3! + x^5/5! - x^7/7! + ...
[2] In reality the error will usually be quite a bit below this upper limit. The Taylor series for a given x is a convergent alternating series. That is, it is of the form A0 - A1 + A2 - A3 + ... where all the A's have the same sign, |Ak| is a decreasing sequence past some particular k, and |Ak| has a limit of 0 as k goes to infinity. Such a series converges to some value, and if you approximate that by just taking the first N terms the error cannot exceed the first omitted term as long as N is large enough to take you to the point where the sequence from there on is decreasing. This is the upper bound I'm using above.
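A rough C stand-in for what that program computes (a sketch, using the first-omitted-term bound from [2], expressed relative to sin(x)):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 0.1;      /* the point at which to bound the error */
    double term = x;     /* running Taylor term x^(2n+1) / (2n+1)! */
    for (int n = 1; n <= 4; n++) {
        /* first omitted term of the n-term approximation */
        term *= x * x / ((2.0 * n) * (2.0 * n + 1.0));
        printf("%d  %.3g%%\n", n, 100.0 * term / sin(x));
    }
    return 0;
}

For x = 0.1 this reproduces the table above.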
MauranKilom 2021-08-17 21:19:33 +0000 UTC [ - ]
The sin(x) = x approximation is actually exact (in terms of doubles) for |x| < 2^-26 ≈ 1.49e-8. See also [1]. This happens to cover 48.6% of all doubles.
Similarly, cos(x) = 1 for |x| < 2^-27 ≈ 7.45e-9 (see [2]).
Finally, sin(double(pi)) tells you the error in double(pi) - that is, how far the double representation of pi is away from the "real", mathematical pi [3]. (A quick check of all three claims is sketched after the links.)
[0]: https://randomascii.wordpress.com/2014/10/09/intel-underesti...
[1]: https://github.com/lattera/glibc/blob/master/sysdeps/ieee754...
[2]: https://github.com/lattera/glibc/blob/master/sysdeps/ieee754...
[3]: https://randomascii.wordpress.com/2012/02/25/comparing-float...
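A minimal check of the three claims above (a sketch; the exact results depend on your libm being correctly rounded for tiny arguments):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* below 2^-26, the correctly rounded sin(x) is x itself */
    printf("sin(1e-9) == 1e-9: %d\n", sin(1e-9) == 1e-9);
    /* below 2^-27, the correctly rounded cos(x) is exactly 1 */
    printf("cos(5e-10) == 1: %d\n", cos(5e-10) == 1.0);
    /* sin(double(pi)) ~= pi - double(pi), the representation error */
    printf("sin(M_PI) = %.17g\n", sin(M_PI));   /* ~1.2246e-16 */
    return 0;
}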
quietbritishjim 2021-08-17 13:48:23 +0000 UTC [ - ]
RicoElectrico 2021-08-17 14:46:17 +0000 UTC [ - ]
What is more interesting to me is that this can be one of the rationales behind using radians.
And tan(x) ~ x also holds for small angles, which greatly eases estimating the distance to objects of known size (cf. the mil-dot reticle).
lapetitejort 2021-08-17 15:18:08 +0000 UTC [ - ]
chriswarbo 2021-08-17 13:47:40 +0000 UTC [ - ]
Whether it's appropriate in a numerical calculation obviously depends on the possible inputs and the acceptable error bars :)
sharikone 2021-08-17 13:45:28 +0000 UTC [ - ]
ThePadawan 2021-08-17 14:05:24 +0000 UTC [ - ]
That the sin(x) approximation still works well for 10^-1 (with an error of ~0.01%) is the really cool thing!
jvz01 2021-08-17 17:00:35 +0000 UTC [ - ]
Bostonian 2021-08-17 13:49:37 +0000 UTC [ - ]
ThePadawan 2021-08-17 13:54:26 +0000 UTC [ - ]
What I really found amazing was that rather than reducing the number of multiplications to 2 (to calculate x^3), you can reduce it to 0, and it still does surprisingly well.
mooman219 2021-08-17 16:26:37 +0000 UTC [ - ]
https://gist.github.com/mooman219/19b18ff07bb9d609a103ef0cd0...
tantalor 2021-08-17 13:34:07 +0000 UTC [ - ]
bluedino 2021-08-17 13:38:13 +0000 UTC [ - ]
tantalor 2021-08-17 13:48:22 +0000 UTC [ - ]
bruce343434 2021-08-17 13:39:46 +0000 UTC [ - ]
franciscop 2021-08-17 13:43:34 +0000 UTC [ - ]
> The term "memoization" was coined by Donald Michie in 1968[3] and is derived from the Latin word "memorandum" ("to be remembered"), usually truncated as "memo" in American English, and thus carries the meaning of "turning [the results of] a function into something to be remembered". While "memoization" might be confused with "memorization" (because they are etymological cognates), "memoization" has a specialized meaning in computing.
wongarsu 2021-08-17 13:54:04 +0000 UTC [ - ]
As Wikipedia outlines, the r was dropped because of "memo". It's derived from the Latin word memorandum, which does contain the r, just like memory, but apparently it was meant more as an analogy to written memos.
Const-me 2021-08-17 13:46:00 +0000 UTC [ - ]
Based on the code, your version is probably much faster. It would still be interesting to compare precision; MS uses a 17-degree polynomial there.
pclmulqdq 2021-08-17 14:15:03 +0000 UTC [ - ]
That said, I wonder if a rational approximation might not be so bad on modern machines...
azalemeth 2021-08-17 14:50:14 +0000 UTC [ - ]
jacobolus 2021-08-17 19:28:27 +0000 UTC [ - ]
See also the lecture https://www.youtube.com/watch?v=S1upJPMIFfg
azalemeth 2021-08-17 21:47:25 +0000 UTC [ - ]
Thanks also for the lecture - I'll enjoy it. Nick Trefethen is such a good speaker. He taught me in a graduate course in numerical methods -- honestly, I'd watch those lectures on a loop again.
(1) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5179227/
(2) https://iopscience.iop.org/article/10.1088/0031-9155/51/5/00...
petschge 2021-08-17 15:02:33 +0000 UTC [ - ]
Someone 2021-08-17 18:07:25 +0000 UTC [ - ]
If, as this article does, you compute all your intermediate values as 32-bit IEEE floats, it would hugely surprise me if you still got 24 bits of precision after 1 division (to compute y/x or x/y), 15 multiplications and 14 additions (to evaluate the polynomial) and, sometimes, a subtraction from π/2.
(That’s a weakness in this article: they point out how fast their code is, but do not show any test that it actually computes an approximation to atan2, let alone how accurate the results are.)
Getting all the bits right in floating point is expensive.
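A sketch of the kind of accuracy test being asked for, comparing against double-precision atan2 as the reference (candidate here is just libm's atan2f; swap in the approximation under test):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* stand-in for the approximation under test */
static float candidate(float y, float x) { return atan2f(y, x); }

int main(void)
{
    double worst = 0.0;
    for (int i = 0; i < 10000000; i++) {
        float y = 2.0f * rand() / (float)RAND_MAX - 1.0f;
        float x = 2.0f * rand() / (float)RAND_MAX - 1.0f;
        double err = fabs((double)candidate(y, x) - atan2(y, x));
        if (err > worst) worst = err;
    }
    printf("worst absolute error: %g rad\n", worst);
    return 0;
}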
adgjlsfhk1 2021-08-17 18:23:08 +0000 UTC [ - ]
Const-me 2021-08-17 14:41:01 +0000 UTC [ - ]
Uops.info says the throughput of 32-byte FP32 vector division is 5 cycles on Intel, 3.5 cycles on AMD.
The throughput of 32-byte vector multiplication is 0.5 cycles. The same goes for FMA, a single instruction computing a*b+c on three float vectors.
If the approximation formula only has 1 division, where both numerator and denominator are polynomials of degree much lower than 17, the rational version might not be too bad compared to the 17-degree polynomial version.
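To make the shape concrete, a sketch (p[] and q[] are placeholder coefficients, not a fitted atan approximation): each polynomial collapses into a chain of FMAs, leaving a single division at the end.

#include <math.h>

/* Odd rational approximation r(x) = x*P(x^2) / Q(x^2), evaluated
   with Horner's rule on fused multiply-adds plus one division. */
static float rational_eval(float x, const float p[3], const float q[3])
{
    float x2  = x * x;
    float num = x * fmaf(fmaf(p[2], x2, p[1]), x2, p[0]);
    float den = fmaf(fmaf(q[2], x2, q[1]), x2, q[0]);
    return num / den;   /* the one division being costed out above */
}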
pclmulqdq 2021-08-17 15:37:41 +0000 UTC [ - ]
janwas 2021-08-17 18:02:32 +0000 UTC [ - ]
I initially used Chebfun to approximate them, then a teammate implemented the same Caratheodory-Fejer method in C. We subsequently convert from Chebyshev basis to monomials.
Blog sounds good :)
stephencanon 2021-08-17 20:57:24 +0000 UTC [ - ]
Which libc, though? I assume glibc, but it's frustrating when people talk about libc as though there were a single implementation. Each vendor supplies their own; "libc" is just a common interface defined by the C standard. There is no single "standard version" of libc.
In particular, glibc's math functions are not especially fast -- Intel's and Apple's math libraries, for example, are 4-5x faster for some functions[1], and often more accurate as well (and both vendors provide vectorized implementations). Even within glibc versions, there have been enormous improvements over the last decade or so, and for some functions there are big performance differences depending on whether or not -fno-math-errno is specified.

(I would also note that atan2 has a lot of edge cases, and more than half the work in a standards-compliant libc is in getting those edge cases with zeros and infinities right, which this implementation punts on. There's nothing wrong with that, but it's a bigger tradeoff for most users than the small loss of accuracy, and important to note.)
So what are we actually comparing against here? Comparing against a clown-shoes baseline makes for eye-popping numbers, but it's not very meaningful.
None of this should really take away from the work presented, by the way--the techniques described here are very useful for people interested in this stuff.
[1] I don't know the current state of atan2f in glibc specifically; it's possible that it's been improved since last I looked at its performance. But the blog post cites "105.98 cycles / element", which would be glacially slow on any semi-recent hardware, which makes me think something is up here.
rostayob 2021-08-17 21:14:48 +0000 UTC [ - ]
You're right, I should have specified -- it is glibc 2.32-48. This is the source specifying how glibc is built: https://github.com/NixOS/nixpkgs/blob/97c5d0cbe76901da0135b0...
I've amended the article so that it says `glibc` rather than `libc`, and added a sidenote specifying the version.
I link to it statically as indicated in the gist, although I believe that shouldn't matter. Also see https://gist.github.com/bitonic/d0f5a0a44e37d4f0be03d34d47ac... .
Note that the hardware is not particularly recent (Q3 2017), but we tend to rent servers which are not exactly on the bleeding edge, so that was my platform.
floxy 2021-08-17 23:49:31 +0000 UTC [ - ]
https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/iee...
...where the atan2f version ends up calling atan, which doesn't seem as sophisticated:
https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/iee...
stephencanon 2021-08-17 21:21:41 +0000 UTC [ - ]
Without benchmarking, I would expect atan2f to be around 20-30 cycles per element or less with either Intel's or Apple's scalar math library, and proportionally faster for their vector libs.
rostayob 2021-08-17 21:28:58 +0000 UTC [ - ]
By the way, your writing on floating point arithmetic is very informative -- I even cite a message of yours on FMA in the post itself!
rostayob 2021-08-17 21:21:20 +0000 UTC [ - ]
I did want to deal with 0s in either coordinate, though, and to preserve NaNs.
You're right that atan2 has a lot of edge cases, which is why it made for an interesting subject. I thought it was neat that the edge cases I did care about fell out fairly naturally from the efficient bit-twiddling.
That said, the versions presented in the post are _not_ conforming, as I remark repeatedly. The article is more about getting people curious about some topics that I think are neat than a guide on how to implement those standard functions correctly.
hdersch 2021-08-17 17:01:31 +0000 UTC [ - ]
prionassembly 2021-08-17 14:12:06 +0000 UTC [ - ]
nimish 2021-08-17 14:52:17 +0000 UTC [ - ]
Personally I prefer Chebyshev approximants, as in Chebfun, but Padé approximants are really good, at the cost of a division.
stephencanon 2021-08-17 21:18:38 +0000 UTC [ - ]
jvz01 2021-08-17 16:32:09 +0000 UTC [ - ]
jvz01 2021-08-17 16:57:53 +0000 UTC [ - ]
ghusbands 2021-08-17 16:38:25 +0000 UTC [ - ]
jvz01 2021-08-17 16:41:29 +0000 UTC [ - ]
The coefficients were generated by a package called Sollya; I've used it a few times to develop accurate Chebyshev approximations of functions.
Abramowitz & Stegun is another good reference.
touisteur 2021-08-17 19:41:17 +0000 UTC [ - ]
shoo 2021-08-17 21:20:08 +0000 UTC [ - ]
Mineiro's faster approximate log2, with < 1.4% relative error for x in [1/100, 10]. Here's the simple non-SSE version:
#include <stdint.h>

static inline float
fasterlog2 (float x)
{
  /* reinterpret the bits of the float as an integer: for x > 0 the
     bit pattern is approximately 2^23 * (log2(x) + 127) */
  union { float f; uint32_t i; } vx = { x };
  float y = vx.i;
  y *= 1.1920928955078125e-7f;   /* scale by 2^-23 */
  return y - 126.94269504f;      /* remove the exponent bias, tuned to
                                    minimize the relative error */
}
This fastapprox library also includes fast approximations of some other functions that show up in statistical / probabilistic calculations -- gamma, digamma, the Lambert W function. It is BSD licensed, originally lived on Google Code, and copies of the library live on in GitHub, e.g. https://github.com/etheory/fastapprox

It's also interesting to read through libm. E.g. compare Sun's ~1993 atan2 & atan:
https://github.com/JuliaMath/openlibm/blob/master/src/e_atan...
https://github.com/JuliaMath/openlibm/blob/master/src/s_atan...
zokier 2021-08-17 14:12:32 +0000 UTC [ - ]
float atan_input = (swap ? x : y) / (swap ? y : x);
pklausler 2021-08-17 18:54:31 +0000 UTC [ - ]
drfuchs 2021-08-17 13:37:02 +0000 UTC [ - ]
adrian_b 2021-08-17 13:46:42 +0000 UTC [ - ]
CORDIC, in its basic form, produces just 1 bit of result per clock cycle. Only at very low precisions (i.e., few bits per result) would you have a chance to overlap enough parallel atan2 computations to reach a throughput comparable to polynomial approximations on modern CPUs with pipelined multipliers.
CORDIC remains useful only for ASIC/FPGA implementations, in cases where area and power consumption are much more important than speed.
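For the curious, the basic vectoring iteration looks like this (a software sketch; real hardware replaces the multiplications by 2^-i with shifts and the atan(2^-i) calls with a small precomputed table):

#include <math.h>

/* CORDIC in vectoring mode: drive y toward 0 while accumulating the
   rotation angle, gaining roughly one bit per iteration.
   Valid for x > 0; the quadrant fix-up is omitted. */
static double cordic_atan(double y, double x, int iters)
{
    double angle = 0.0;
    for (int i = 0; i < iters; i++) {
        double k = ldexp(1.0, -i);   /* 2^-i */
        double xi = x;
        if (y > 0) { x += y * k; y -= xi * k; angle += atan(k); }
        else       { x -= y * k; y += xi * k; angle -= atan(k); }
    }
    return angle;   /* approaches atan2(y0, x0) as iters grows */
}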
mzs 2021-08-17 13:52:02 +0000 UTC [ - ]
sorenjan 2021-08-17 14:55:43 +0000 UTC [ - ]
zbjornson 2021-08-17 15:02:40 +0000 UTC [ - ]
To your last question specifically, if you _mm256_load_ps(&ys[i]) and you're at the edge of a page, you'll get a segfault. Otherwise you'll get undefined values (which can be okay, you could ignore them instead of dealing with a partial load).
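One way to handle the tail safely is a masked load, which both suppresses the fault and zeroes the unused lanes. A sketch assuming AVX2 (load_tail is a made-up helper name):

#include <immintrin.h>

/* Load `remaining` (1..7) floats from p without touching memory past
   them: lanes whose mask high bit is clear are not read (no fault at
   a page edge) and come back as 0.0f. */
static __m256 load_tail(const float *p, int remaining)
{
    __m256i idx  = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i mask = _mm256_cmpgt_epi32(_mm256_set1_epi32(remaining), idx);
    return _mm256_maskload_ps(p, mask);
}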
twic 2021-08-17 15:46:44 +0000 UTC [ - ]
vlovich123 2021-08-17 15:22:51 +0000 UTC [ - ]
Typically the latter. That's why I find SVE so interesting. It should improve code density.
TonyTrapp 2021-08-17 15:00:17 +0000 UTC [ - ]
aconz2 2021-08-17 16:16:04 +0000 UTC [ - ]
The baseline is at a huge disadvantage here because the call to atan2 in the loop never gets inlined and the loop doesn't seem to get unrolled (which is surprising, actually). Manually unrolling by 8 gives me an 8x speedup. Maybe I'm missing something with the `-static` link, but unless they're using musl I didn't think -lm could get statically linked.
xxpor 2021-08-17 16:22:57 +0000 UTC [ - ]
aconz2 2021-08-17 19:44:58 +0000 UTC [ - ]
nice2meetu 2021-08-17 17:04:51 +0000 UTC [ - ]
Part of the motivation was that I could get 10x faster than libc. However, I then tried it on my FreeBSD machine and could only get 4x faster. After a lot of head scratching and puzzling, it turned out there was a bug in the version of libc on my Linux box that slowed things down. It kind of took the wind out of the achievement, but it was still a great learning experience.
azhenley 2021-08-17 18:54:24 +0000 UTC [ - ]
spaetzleesser 2021-08-17 18:22:18 +0000 UTC [ - ]
This is so much more fun than figuring out why some SAAS service is misbehaving.
cogman10 2021-08-17 18:34:21 +0000 UTC [ - ]
gumby 2021-08-17 14:11:04 +0000 UTC [ - ]
> This is achieved through ...and some cool documents from the 50s.
A bit of an anecdote: back when I was a research scientist (corporate lab) 30+ years ago I would in fact go downstairs to the library and read — I was still a kid with a lot to learn (and still am). When I came across (by chance or by someone’s suggestion) something useful to my work I’d photocopy the article and try it out. I’d put a comment in the code with the reference.
My colleagues in my group would give me (undeserved) credit for my supposed brilliance even though I said in the code where the idea came from and would determinedly point to the very paper on my desk. This attitude seemed bizarre as the group itself was producing conference papers and even books.
(Obviously this was not a universal phenomenon as there were other people in the lab overall, and friends on the net, suggesting papers to read. But I’ve seen it a lot, from back then up to today)
lmilcin 2021-08-17 15:35:12 +0000 UTC [ - ]
As a developer, I am regularly in situations where I solve some kind of large problem that people have had for a long time, with significant impact for the company, and mostly with a couple of Google searches.
Once I took a batch task that took 11 hours to complete and reduced it to about 5s of work by looking up a paper on bitemporal indexes. The reactions ranged from "You are awesome!" to "You read a... paper????" to "No, we can't have this, this is super advanced code and it surely must have bugs. Can't you find an open source library that implements this?"
refset 2021-08-17 17:24:54 +0000 UTC [ - ]
Undoubtedly too late for you, but https://github.com/juxt/crux/blob/master/crux-core/src/crux/... :)
jacobolus 2021-08-17 17:16:27 +0000 UTC [ - ]
zokier 2021-08-17 16:57:05 +0000 UTC [ - ]
It's exactly because information is so ubiquitous and plentiful that finding a reasonable solution is like looking for a needle in a haystack.
lmilcin 2021-08-17 17:46:06 +0000 UTC [ - ]
More likely they just went to roll out their own solution for any number of other reasons:
-- For the fun of intellectual challenge (I am guilty of this),
-- For having the hubris to think they are the first to meet that challenge (I have been guilty of this in the past; I no longer make that mistake),
-- For having the hubris to think they have the uniquely best solution to the problem, so it is not even worth verifying,
-- For feeling superior to everybody else, and hence seeing no real reason to dignify other, lesser people by taking an interest in their work,
-- For feeling there is just too much pressure and no time to do it right, or even to take a peek at what "right" is and how much time it would take,
-- For not having enough experience to suspect there might be a solution to this problem,
-- Books? Only old people read books,
-- Papers? They are for research scientists. I haven't finished CS to read papers at work.
and so on.
There have really been very few concrete problems that I could not find some solution to on the Internet.
It is more difficult to find solutions to less concrete problems, but even with those, if you have a habit of regularly seeking knowledge, reading various sources, etc., you will probably amass enough experience and knowledge to deal with most complex problems on your own.
djmips 2021-08-17 19:08:09 +0000 UTC [ - ]
lmilcin 2021-08-17 20:13:40 +0000 UTC [ - ]
People have lost the ability to read anything longer than a tweet with focus and comprehension. Especially new developers.
I know this because whatever I put in the third and later paragraphs of my tickets is not being read or understood.
I sometimes quiz the guys in review meetings, and I even sometimes put the most important information somewhere in the middle of the ticket. Always with the same result -- they are surprised.
Now, to read a paper is to try to imagine being the person who wrote it, and to appreciate not just the written text but also many things that are not necessarily spelled out.
The same with code -- when I read somebody's code I want to recognize their style, how they approach various problems. Once you know a little bit about their vocabulary you can really speed up reading the rest, because you can anticipate what is going to happen next.
nuclearnice1 2021-08-17 20:55:49 +0000 UTC [ - ]
If they work on the problem, they will make progress and likely solve it.
If they search the literature, they will spend time on research and quite possibly lose days before emerging with nothing to show for it.
lmilcin 2021-08-18 00:38:35 +0000 UTC [ - ]
I see you prefer lousy results.
Also, you read BEFORE you need it. To be a literate, educated, knowledgeable developer. And to know where the knowledge is when you need it (because you can't remember everything).
nuclearnice1 2021-08-18 03:26:49 +0000 UTC [ - ]
I wasn’t stating a preference, merely observing another possible reason people don’t research solutions before diving in.
caf 2021-08-18 04:50:09 +0000 UTC [ - ]
If Edison had a needle to find in a haystack, he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. … I was a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety per cent of his labor. (Nikola Tesla, on Edison)
lmilcin 2021-08-18 11:27:22 +0000 UTC [ - ]
Which, of course, requires that you somehow know of magnets beforehand.
Which is exactly the point -- even if you are in the business of finding needles in haystacks, it pays to have knowledge in other areas, and to read books and other knowledge sources. You never know when it is going to pay off.
nextaccountic 2021-08-18 08:08:59 +0000 UTC [ - ]
Please tell me this person didn't win and they got to keep your solution.
(my response would be "let me open source this then")
lmilcin 2021-08-18 11:24:07 +0000 UTC [ - ]
As to open sourcing, not going to happen.
Companies treat code as if it were a tangible good: if you give it to somebody else, you are losing value.
Which is super strange, because code needs to be maintained, so why not have other people do that for you if they find it valuable and worth maintaining?
teddyh 2021-08-17 14:18:31 +0000 UTC [ - ]
dleslie 2021-08-17 15:18:56 +0000 UTC [ - ]
Personally, I love when I get a chance to apply something from a published paper; I will leave a citation as a comment in the code and a brief explanation of how it works and why it was chosen. I have no regrets for having achieved a computing degree, so many years ago.
whatshisface 2021-08-17 14:18:38 +0000 UTC [ - ]
gumby 2021-08-17 17:50:11 +0000 UTC [ - ]
specialist 2021-08-17 17:05:49 +0000 UTC [ - ]
I truly miss foraging the stacks. Such delight.
Browsing scholar.google.com, resulting in 100s of tabs open, just isn't the same.
jjgreen 2021-08-17 22:48:38 +0000 UTC [ - ]
IntrepidWorm 2021-08-17 23:14:47 +0000 UTC [ - ]
See: https://en.m.wikipedia.org/wiki/Mariko_Aoki_phenomenon
The academic consensus seems understandably mixed, but it is certainly a fascinating thing to ponder (and what a wonderful avenue of research)!
I'd wager any significant breakthrough here would be up for consideration for an Ig Nobel.
jjgreen 2021-08-18 08:55:03 +0000 UTC [ - ]
touisteur 2021-08-17 18:47:43 +0000 UTC [ - ]
Recently I rediscovered the ability to compute logs with 64-bit integers at better precision than double precision. Man, jumping back into those depths after so many years... Not as easy as reading other people's code.
_moof 2021-08-17 22:31:50 +0000 UTC [ - ]
kbenson 2021-08-17 19:55:59 +0000 UTC [ - ]
We should all strive to stay young at heart like this. ;)