Optimizing Long Lists Of Yes / No Values With JavaScript


Very frequently in Web development (and programming in general), you need to store a long list of boolean values (yes/no, true/false, checked/unchecked… you get the idea) into something that accepts only strings. Maybe it’s because you want to store them in localStorage or in a cookie, or send them through the body of an HTTP request. I’ve needed to do this countless times.

The last time I stumbled on such a case wasn’t with my own code. It was when Christian Heilmann [1] showed me his then-new slide deck [2], which had a cool feature where you could toggle the visibility of individual slides in and out of the presentation. On seeing it, I was impressed. Looking more closely, though, I realized that the checkbox states did not persist after the page reloaded. So, someone could spend a long time carefully tweaking their slides, only to accidentally hit F5 or crash their browser, and then — boom! — all their work would be lost. Christian told me that he was already working on storing the checkbox states in localStorage. Then, naturally, we endlessly debated the storage format. That debate inspired me to write this article, exploring the various approaches in depth.

Using An Array

We have two (reasonable) ways to model our data in an array. One is to store true/false values, like so:

[false, true, true, false, false, true, true]

The other is to store an array of 0s and 1s, like so:

[0, 1, 1, 0, 0, 1, 1]

Whichever solution we go with, we will ultimately have to convert it to a string, and then convert it back to an array when it is read. We have two ways to proceed: either with the old Array#join() (or Array#toString()) and String#split(), or with the fancier JSON.stringify() and JSON.parse().

With the JSON way, the code will be somewhat shorter, although it is the JavaScript equivalent of slicing bread with a chainsaw. Not only is there a performance impact in most browsers [3], but you’re also cutting down browser support quite a bit.
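As a quick sketch of the two round-trips (the `states` array here is just an example):

```javascript
// A sketch of both serialization round-trips
var states = [0, 1, 1, 0, 0, 1, 1];

// Plain join/split: widely supported, but split() gives strings back,
// so we convert each element back to a number
var str = states.join(',');             // '0,1,1,0,0,1,1'
var back = str.split(',').map(Number);  // [0, 1, 1, 0, 0, 1, 1]

// JSON: shorter code, and the original value types survive the round-trip,
// at some cost in performance and browser support
var json = JSON.stringify(states);      // '[0,1,1,0,0,1,1]'
var back2 = JSON.parse(json);           // [0, 1, 1, 0, 0, 1, 1]
```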

The main drawback of using array-based strings is their size in bytes. If you go with the number method, you would use almost 2 characters per value (or, more precisely, 2N − 1 characters for N values, since you’d need one delimiter after each number, except for the last one):

[0, 1, 1, 0, 0, 1, 1].toString().length // 13, for 7 values

So, for 512 numbers, that would be 1023 characters or 2 KB, since JavaScript uses UTF-16 [4]. If you go with the boolean method, it’s even worse:

[false, true, true, false, false, true, true].toString().length // 37, also for 7 values

That’s around 5 to 6 characters per value, so 2560 to 3072 characters for 512 values (which is 5 to 6 KB). JSON.stringify() even wastes 2 more characters in each case, for the opening and closing brackets, but its advantage is that you get your original value types back with JSON.parse(), instead of strings.

Using A String

Using a string saves some space, because no delimiters are involved. For example, if you go with the number approach and store strings like '01001101010111', you are essentially storing one character per value, which is 100% better than the better of the two previous approaches. You can then get the values into an array by using String#split:

'01001101010111'.split(''); // ['0','1','0','0','1','1','0','1','0','1','0','1','1','1']

Or you could just loop over the string using string.charAt(i) — or even the string indexes (string[i]), if you don’t care about older browsers.
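Such a loop could look like the following sketch (the `toBooleans` helper name is just for illustration); it sticks to charAt() for the sake of older browsers:

```javascript
// Turn a stored bit string back into an array of booleans
function toBooleans(bits) {
    var result = [];
    for (var i = 0; i < bits.length; i++) {
        // charAt() instead of bits[i], for older browsers
        result.push(bits.charAt(i) === '1');
    }
    return result;
}

toBooleans('0110');  // [false, true, true, false]
```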

Using Bitfields

Did the previous method make you think of binary numbers? It’s not just you. The concept of bitfields [5] is quite popular in other programming languages, but not so much in JavaScript. In a nutshell, bitfields pack a lot of boolean values into the bits of the binary representation of a number. For example, if you have eight values (true, false, false, true, false, true, true, false), the number would be 10010110 in binary; so, 150 in decimal and 96 in hex. That’s 2 characters instead of 8, so 75% saved. In general, 1 digit in the hex representation corresponds to exactly 4 bits. (That’s because 16 = 2^4. In general, in a base-2^n system, you can pack n bits into every digit.) So, we weren’t just lucky with that 75%; it’s always that much.

Thus, instead of storing that string as a string and using 1 character per value, we can be smarter and convert it to a (hex) number first. How do we do that? It’s no more than a line of code:

parseInt('10010110', 2).toString(16); // returns '96'

And how do we read it back? That’s just as simple:

parseInt('96', 16).toString(2); // returns  '10010110'

From this point on, we can follow the same process as the previous method to loop over the values and do something useful with them.
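Alternatively, instead of converting back to a string of '0's and '1's, you can test individual bits directly with bitwise operators. This is a sketch (the `getBit` helper is my own name, not from the article); bit 0 here is the least significant, i.e. rightmost, bit:

```javascript
var bitfield = parseInt('96', 16);  // 150, i.e. 10010110 in binary

function getBit(n, i) {
    // Shift bit i down to position 0, then mask off everything else
    return (n >> i) & 1;
}

getBit(bitfield, 0);  // 0 (rightmost bit of 10010110)
getBit(bitfield, 1);  // 1
getBit(bitfield, 7);  // 1 (leftmost bit)
```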

Can We Do Better?

In fact, we can! Why convert it to a hex (base 16) number, which uses only 6 of the 26 letters of the alphabet? The Number#toString() method allows us to go up to base 36 [6] (throwing a RangeError for radices greater than that), which effectively uses all the letters, all the way up to z! This way, we can have a compression of up to 6 characters for 32 values, which means saving up to 81.25% compared to the plain string method! And the code is just as simple:

parseInt( '1001011000', 2).toString(36); // returns 'go' (instead of '258', which would be the hex version)
parseInt('go', 36).toString(2); // returns  '1001011000'

For some of you, this will be enough. But I can almost hear the more inquisitive minds out there shouting, “But we have capital letters, we have other symbols, we are still not using strings to their full potential!” And you’d be right. There is a reason why, every time you open a binary file in a text editor, you get weird symbols mixed with numbers, uppercase letters, lowercase letters and whatnot. Every code unit in a UTF-16 string is 2 bytes (16 bits), which means that if we use the right packing scheme, we should be able to store 16 yes/no values in each one (saving 93.75% over the string method).

The problem is that JavaScript doesn’t offer a built-in way to do that, so the code becomes a bit more complicated.

Packing 16 Values Into One Character

You can use String.fromCharCode to get the individual characters. It accepts a numerical value of up to 65,535 and returns the corresponding character (values greater than that wrap around modulo 65,536, per the spec’s ToUint16 conversion, so avoid them).
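A few quick checks of its behavior (note that, per the spec’s ToUint16 conversion, out-of-range values wrap modulo 65,536 rather than failing):

```javascript
String.fromCharCode(65);      // 'A'
String.fromCharCode(0x2603);  // '☃' — any 16-bit value maps to one code unit

// Larger values wrap around modulo 65,536, so the result is
// probably not what you want:
String.fromCharCode(65536).charCodeAt(0);  // 0
```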

So, we have to split our string into chunks of up to 16 characters each. We can do that with .match(/.{1,16}/g) [7]. To sum up, the full solution would look like this:

function pack(/* string */ values) {
    // Note: for a lossless round-trip, values.length should be a multiple of 16
    var chunks = values.match(/.{1,16}/g), packed = '';
    for (var i = 0; i < chunks.length; i++) {
        packed += String.fromCharCode(parseInt(chunks[i], 2));
    }
    return packed;
}

function unpack(/* string */ packed) {
    var values = '';
    for (var i = 0; i < packed.length; i++) {
        // Pad each chunk back to 16 bits, so that leading zeros survive
        var chunk = packed.charCodeAt(i).toString(2);
        while (chunk.length < 16) {
            chunk = '0' + chunk;
        }
        values += chunk;
    }
    return values;
}

It wasn’t that hard, was it?

With these few lines of code, you can pack the aforementioned 512 values into — drum roll, please — 32 characters (64 bytes)!

Quite an improvement over our original 2 KB (with the array method), isn’t it?
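To verify the claim, here is a self-contained sketch. It inlines variants of the two functions, with unpack padding each chunk back to 16 bits so that leading zeros survive the round-trip; the input length is assumed to be a multiple of 16:

```javascript
function pack(values) {
    var chunks = values.match(/.{1,16}/g), packed = '';
    for (var i = 0; i < chunks.length; i++) {
        packed += String.fromCharCode(parseInt(chunks[i], 2));
    }
    return packed;
}

function unpack(packed) {
    var values = '';
    for (var i = 0; i < packed.length; i++) {
        // Pad back to 16 bits so leading zeros are not lost
        var chunk = packed.charCodeAt(i).toString(2);
        while (chunk.length < 16) {
            chunk = '0' + chunk;
        }
        values += chunk;
    }
    return values;
}

// 512 random yes/no values as a bit string
var values = '';
for (var i = 0; i < 512; i++) {
    values += Math.round(Math.random());  // appends '0' or '1'
}

var packed = pack(values);
packed.length;             // 32
unpack(packed) === values; // true
```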

Limitations

Numbers in JavaScript have limits. For the methods discussed here that involve an intermediate state of converting to a number, the limit appears to be 1023 yes/no values, because parseInt('1111…1111', 2) returns Infinity when there are more than 1023 ones. This limit does not apply to the last method, because we only ever convert blocks of 16 bits instead of the whole thing. And, of course, it doesn’t apply to the first two methods (array and string), because they don’t involve packing the values into an integer.
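A quick check of where the intermediate-number approaches break down (the trick `new Array(n + 1).join('1')` builds a string of n '1's):

```javascript
var ones1023 = new Array(1024).join('1');  // 1023 '1's
var ones1024 = new Array(1025).join('1');  // 1024 '1's

isFinite(parseInt(ones1023, 2));  // true: 2^1023 - 1 still fits in a double
parseInt(ones1024, 2);            // Infinity: past the range of IEEE 754 doubles
```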

“I Think You Took It A Bit Too Far”

This might be overkill for some cases. But it will definitely come in handy when you want to store a lot of boolean values in any limited space that can only store strings. And no optimization is overkill for things that go over the wire frequently. For example, cookies are sent with every single request, so they should be as tiny as possible. Another use case would be online multiplayer games, for which response times should be lightning-fast, otherwise the games wouldn’t be fun.

And even if this kind of optimization isn’t your thing, I hope you’ve found the thought process and the code involved educational.


Thanks to Eli Grey [8] and Jonas Wagner [9] for their advice and corrections.

Image on front page created by Ruiwen Chua [10].

Footnotes

  1. http://twitter.com/#!/codepo8
  2. http://icant.co.uk/talks/koc2011/
  3. http://jsperf.com/json-vs-split-join-for-simple-arrays
  4. http://es5.github.com/#x4.3.16
  5. http://en.wikipedia.org/wiki/Bit_field
  6. http://en.wikipedia.org/wiki/Base_36
  7. http://stackoverflow.com/questions/7033639/javascript-split-large-string-in-n-size-chunks
  8. http://eligrey.com
  9. http://29a.ch/
  10. http://www.flickr.com/photos/7162499@N02/3260095534/


Lea works as a Developer Advocate for W3C. She has a long-standing passion for open web standards, which she fulfills by researching new ways to use them, blogging, speaking, writing, and coding popular open source projects to help fellow developers. She is a member of the CSS Working Group, which architects the language itself. Lea studied Computer Science at the Athens University of Economics and Business, where she co-organized and occasionally lectured in a cutting-edge Web development course for 4th-year undergrads. She is one of the few misfits who love code and design almost equally.

Comments
  1. Oh well, JavaScript developers, now we’ve come full circle.
     Need to learn bitwise operators again, huh? :-)
  2. I greatly appreciate this article. If you get a chance, check out our website design.
  3. This article is very well written, and I really enjoyed it, but is there a
     better example of how to use this? Using this on the slide example provides
     very little utility. In fact, I would not be surprised if all of this
     packing/unpacking actually reduces the performance.

     Sorry for being a downer, but I feel like this is a perfect example of
     premature optimization.

     • 4. The whole point of this packing/unpacking is space efficiency, not time
       efficiency. In Computer Science there’s frequently a tradeoff between the two.

       I mentioned some examples in the last paragraph. Are you asking for
       something more specific, or did you not notice them? :)

       • 5. Usually if you use the word “optimizing” in terms of web development,
         and especially on the front end with JavaScript, it has to do with
         speed. At least that’s how I see it. Make less requests, use less data,
         compression, etc.

         If you’re just talking about packing/unpacking data in a more efficient
         way, you’re just refactoring your code, in my mind.

         • 6. Well, if we pack some data and send it, it will reach its
           destination faster :) so we are optimizing through compression on
           the client side!

           GREAT JOB LEA!

           Dave, your attitude is being very jelly :) I think.

           • 7. I think what Dave is trying to say is that if we send a value
             that, because of its size, takes 250ms to transport and 250ms to
             parse, we have a 500ms impact on the user. However, if we send a
             value that takes 50ms to transport and 450ms to parse, we still
             have a 500ms impact on the user. What we would want to show from a
             performance perspective is that there’s some net benefit. Example:
             we reduce transport time to 50ms and parsing time only rises by
             100ms, so we have a net gain of 100ms on each use.

           • 8. In response to Michael above: your point is well taken; this
             isn’t worth the trouble for small lists of booleans and for high
             bandwidths. But in reality, transport is more often than not the
             bottleneck. I would hardly expect unpacking to take the majority
             of the time.

             In any case, the question remains as to what array size (for a
             given bandwidth) would result in a net performance increase.

         • 9. But this is the point of the article, Dave! You said it yourself:

           >> Make less requests, use less data, compression, etc.

           The whole point of this (even though it might be a bit extreme) is to
           reduce the amount of data that needs to be transferred. The
           packing/unpacking performance (in terms of speed) is not that big of
           a deal (because it’s unlikely this will need to be called more than
           once per page load), since it will *greatly* reduce the amount of
           data that needs to be transferred!

     • 10. Might be good to ponder which one is more valuable in each case: CPU
       or bandwidth.

       I believe that for a web experience, the CPU usage for this
       transformation is almost unnoticeable in modern browsers, versus
       transmitting chunks of cookie data with each resource of a given page.

       For a game, it might be better to save the CPU for other things rather
       than play with character conversion, and store those values expanded
       anyway…

       So my conclusion is: it depends on your goal.

     • 11. Premature optimization is exactly what I thought while reading this
       article (which is well written indeed).
  12. Good article, Lea.

      Not to mention extreme situations, like storing preferences in a small
      firmware memory, like a microwave’s or a TV’s. They probably use this
      approach too. Just guessing, I’m not an expert in this field anyway :)
  14. Another approach: on click, pass the checkbox id to a jQuery function.
      This function determines whether it’s checked or not, then passes the
      value via an Ajax POST and saves the visibility state in MySQL. Example:

      $(document).ready(function() {
          $(".galleryDisplay").click(function(event) {
              var gid = $(this).attr('rel');
              if ($(this).is(':checked'))
                  $.post('galleryStatus.php', {'gid': gid, 'visability': 'y'});
              else
                  $.post('galleryStatus.php', {'gid': gid, 'visability': 'n'});
          });
      });

      // loop all gallery images and assign ids to rel checkboxes
      <input type="checkbox" name="galleryDisplay" class="galleryDisplay" value="y" rel='' />

      There are a few bits of PHP code here that the comment system filters out,
      but you get the general idea.
  16. If the answer isn’t “yes”, then it must be “no”, right? Why store all the
      answers if you can store the indexes of the “yes” answers only?

      • 17. Indexes take up space too, so in most cases that would take up more
        space. But it always depends on the use case: if trues are way more
        rare than falses, then your solution would be better.
  18. Have you got some benchmarks of these solutions? I’m curious… Why did you
      say that we have to convert the boolean array to a string? Can’t we use it
      without any conversion?
  20. Wow, I wrote *exactly* the same thing a couple of weeks ago. Thanks for
      the well-written article.

      One thing I might contribute to this approach: if the selection that you
      are encoding is heavily weighted towards one value (e.g. lots of “yes” or
      lots of “no”), you might want to consider compressing it by assigning bit
      pairs to runs of “0” or “1”. For example, make a run of 15 zeros “00”, 5
      zeros “01”, a single zero “10” and a one “11”, and a message such as
      “1000000000000000100000” will be compressed to “11001101”.

      As you say, none of this is really new, but it’s good to see these things
      applied in web development. My use case wasn’t too obscure: I wanted to
      pack a sparse selection out of 1000s of possible items into a short URL,
      without communicating with a server beforehand.

      • 21. With that solution, however, you are doing gzip’s job for it. It
        may be useful if space in local storage is super-limited, but for
        over-the-wire transfers you’re not really saving anything.

        • 22. The goal isn’t always to reduce transfer; as I said, in my case
          it was to pack the information into a URL.

          In such a case it would be absurd to use gzip. Apart from needing to
          write, test and deploy a gzip JavaScript library, the overhead of the
          format would increase the size of the message instead of decreasing it.

      • 23. As far as space complexity is concerned, I guess this can be used
        too. As Lea said, “The whole point of this packing/unpacking is space
        efficiency, not time efficiency”, so I guess this would even help in
        saving more space.
  24. What the fuck, man. Bit shifting to maximize space? Stop reinventing the
      wheel and use gzip.

      • 25. There is no native JS function to gzip, and implementing it in JS
        can be quite lengthy, which kind of defeats the benefit. What I’m
        suggesting packs the values quite effectively, with minimal JS code.

        • 26. I think Popsicle means using server gzip compression. Then the
          developer of the script has nothing else to do.

          • 27. If you can, sure, do that!
            But when the data is generated client-side, that’s not really an
            option.
  28. Loving the technical aspect of this article, even if the example doesn’t
      have many real-world applications (in my experience at least; I’ve never
      once needed to store more than a couple of booleans). It’s always good to
      keep optimisation at the back of your mind though.
  29. One thing to note though: ASCII is only 7 bits, not really 8 bits. This
      is historic, dating back to the days when the 8 bits included a parity
      bit. Once the parity bit was moved out of the byte, implementations
      decided to extend the character set. Anything over 0x7F (i.e. 0x80–0xFF)
      is implementation-specific when dealing with single-byte characters.

      For JavaScript though, all characters are Unicode, and the most common
      encoding used for those Unicode characters is UTF-8. The first 128
      characters (0–127) are exactly the same as ASCII; however, any byte with
      the MSB set to 1 indicates a multi-byte character, so it needs to have a
      corresponding second, third or fourth byte.

      Byte codes 128–191 are continuation bytes, so they should not appear on
      their own; 192, 193 and 245–255 are invalid and should never show up in a
      valid UTF-8 character string. 194–244 are valid UTF-8 lead bytes;
      however, 0xEF, 0xBB and 0xBF are used as a byte order mark.

      In general, fromCharCode(x) with x > 128 is probably not a good idea,
      since the behaviour is unpredictable.

      • 30. @Philip Tellis: for JavaScript, as far as I know, UTF-16 without
        surrogate pairs is used.

      • 31. Strings in JavaScript are sequences of 16-bit values, which can be
        (and usually are) interpreted as UTF-16. But there’s no actual
        restriction on the use of those 16 bits. See the ECMAScript standard at
        http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf

        “The String type is the set of all finite ordered sequences of zero or
        more 16-bit unsigned integer values [...]. The String type is generally
        used to represent textual data [...], in which case each element in the
        String is treated as a code unit value [...]. When a String contains
        actual textual data, each element is considered to be a single UTF-16
        code unit. [...] All operations on Strings (except as otherwise stated)
        treat them as sequences of undifferentiated 16-bit unsigned integers;
        they do not ensure the resulting String is in normalised form, nor do
        they ensure language-sensitive results.”
  32. Hello Sir,

      the example you have given is not working as expected:

      var atob_change = atob(parseInt('100001', 2));
      document.write(atob_change);
      var orig = parseInt(btoa(atob_change)).toString(2);
      document.write(',Original:' + orig); // returns 11

      Here is a fiddle:

      http://jsfiddle.net/xkeshav/rXpEk/5/

      Please explain why?
  33. Great article Lea, I see where this can come in handy. It just worries me
      that JS is starting to do things that lower-level languages do. It just
      doesn’t feel DRY.
  34. Internally, JavaScript treats strings as UTF-16; that means two bytes for
      every character, no matter what code point it is (contrary to the
      variable length of UTF-8). The question now is, how do browsers store the
      data in localStorage? If (which I doubt) browsers store it in UTF-16, we
      have two bytes for 256 possible values, thus wasting valuable space.

      So what happens if we use all 65536 possible character values? If the
      browser stores it in UTF-16, we have two bytes. If the browser stores it
      in UTF-8, we have one, two or three bytes, with an expectation of ~2.967.

      When using only 256 values (and therefore two characters), we have 4
      bytes in UTF-16 and 2–4 bytes in UTF-8, with an expectation of 3.

      In conclusion, if we have more than 128 values and don’t know about the
      internals of the browser, we should… well, still look at how many values
      we have… Sigh. Or stop these micro-optimizations.

      • 35. I updated the article a while ago to reflect UTF-16 values, since
        that’s what the spec commands and what browsers use. I guess you had it
        open in the browser for a few hours before commenting?

        • 36. Read it in my feed reader…

          Yet my point was meant to be: does using UTF-16 internally mean that
          browsers actually store the localStorage data in UTF-16, or maybe in
          UTF-8 (or, more likely, in some binary database), and how might that
          affect the selection of a proper base? I know, it’s completely
          academic speculation. I like academic speculation :-)

        • 37. Try changing View > Character Encoding; the data may change.

      • 38. For localStorage on Chromium/Chrome, strings seem to be encoded in
        UTF-16.

        With a hex editor, open one of the files stored in:
        AppData\Local\Chromium\User Data\Default\Local Storage

        You should see some strings encoded in UTF-16.

      • 39. Since the data in localStorage is likely never read by anything
        other than JavaScript, it would seem a bit silly to convert it to
        another encoding.
  40. Great article, thxs a lot
  41. Even from a designer’s view I really enjoyed the thinking behind this
      article. Thanks!
  42. Pro: good for transferring lots of data, but in that case server-side
      compression like gzip is usually used.

      Contra: unreadable. The code is hard to read and takes longer to follow
      than a simple array.
  43. Not every character in a UTF-16 string is 2 bytes (16 bits).
      UTF-16 has the same “variable length” principle as UTF-8. The only
      difference between UTF-8 and UTF-16 is that in the (minimum) 16 bits of
      UTF-16, more characters fit without using the UTF extension mechanism
      than in the 8 bits of UTF-8.

      • 44. Nope, that’s false. From http://en.wikipedia.org/wiki/UTF-16 :
        “UTF-16 and UCS-2 produce a sequence of 16-bit code units. Each unit
        thus takes two 8-bit bytes.”

        • 45. No, each *unit* takes two bytes. Characters can take one or two
          units.

          Strings are still defined as sequences of 16-bit numbers, and the
          charCode methods treat them as such, so it doesn’t really matter in
          this context; but strings can definitely contain characters that
          take up more than two bytes: http://jsfiddle.net/hD4Pc/
  46. One thing you didn’t cover is that parseInt on a string of 1s and 0s is
      equivalent to iterating through the array and adding powers of 2. So I
      was going to leave a comment to say the other way is probably better, but
      did a quick test first. Anyway, it turns out (surprisingly, to me) that
      creating and then parsing the string is significantly faster than the
      arithmetical way: http://jsfiddle.net/wheresrhys/zHB8N/. I wonder if this
      would still hold when you get into the higher bases of your later
      examples?

      • 47. @wheresrhys:
        I checked your fiddle in Iceweasel (a kind of Firefox) and Chrome, and
        the two browsers had different results.
        In Chrome viaLoop is faster; in Iceweasel, viaString.
        [I used a 4× bigger array for this test.]

        Lea:
        The article is nice, but… if my browser is able to display 10,000
        checkboxes, storing their values shouldn’t be a problem.

      • 48. For performance tests, better to use jsperf.com. Then you get
        BrowserScope results from multiple browsers after a while.
  49. Hi Lea,

      That’s one of the best articles I’ve read about JS in a while. I think
      that after you learn a lot about any programming language, these nuances
      start bugging you. Which is quicker, array_shift() or array_slice() in
      PHP, and so on.

      Thanks for the article!
  50. Hello,

      while this may seem like a nice idea, you have to look out for strings
      that are too long. Because JavaScript does not have integers, the Number
      type is actually the IEEE 754 implementation of double-precision
      floating point numbers.

      So you might encounter rounding issues:

      var number1 = 1111111111111111111100000;
      var number2 = 1111111111111111111199999;
      console.log(number1 == number2);

      => reports “true”

      So watch out for such bugs, which are very hard to detect.

      • 51. Very good point, upvoted. Although it doesn’t apply to the last
        method, only to the ones that involve such a big number as an
        intermediate step.

      • 52. This is true only for numbers not represented in base 2. As long
        as you do base2 → string → base2 conversions, it’s lossless (provided
        you don’t overflow).

        • 53. All numbers are represented in base 2 internally (the actual
          output is a string representation of the number, but that doesn’t
          change the internal number structure).

          Rounding issues will occur regardless of the actual number base.
          Overflow is one issue, but so are the various accuracy issues with
          floating point numbers.

          The mantissa of a floating point number has a fixed length;
          Wikipedia says that with a 64-bit floating point number, the
          mantissa has a length of 52 bits.

          So if you have a binary number in the format 1…1 with more than 52
          “1”s, the trailing “1”s are rounded.
          Note that (2^53)−1 ≈ 9e15 is much smaller than the actual largest
          floating point number, ≈ 1.798e308.

          The actual problem is not the overflow but the small precision, due
          to the “small” mantissa (so yes, it is an overflow of the mantissa,
          actually).
  54. There is one important thing here!
      If your bit string starts with “0”, then after packing and unpacking,
      the string will not be the same; it will have no leading zeros. So, you
      must always start the bit string with an extra “1” and, when unpacking,
      chop the extra “1” off.
      I’ve used this method earlier and the performance is very good. :)

      • 55. Very good point.

        • 56. Exactly; I think Taai’s remark should definitely be mentioned in
          the article. Without this knowledge, one would be quite surprised to
          get false for '01' == unpack(pack('01')).
  57. This article is really great. I didn’t know these tricks with parseInt
      and toString. Thanks a lot!
  58. A maybe better way of splitting the text into chunks would be:

      var l = values.length, lc = 0, out = [], c = 0;
      for (; lc < l; c++) {
          out[c] = values.slice(lc, lc += 2);
      }

      Although it performs slower in IE8 and earlier, it flies in current
      browsers.
  59. I agree with Jeff Dickey.

      This article covers some good concepts, most of which are outdated. We
      worry less about storage as it gets cheaper and cheaper, and more about
      power consumption/processing power.

      Store the data as it is meant to be stored and let the JavaScript engine
      deal with it; you will run into fewer problems and find it way easier to
      deal with.
  60. Converting the string to a hex or a base-36 number would be enough —
      after all, it makes no sense to save some bytes on the values only to
      waste them on the code required to accomplish that. ;)
  61. Excellent article and something I will be able to put to use in my work
      immediately. I need to do some thinking on my own about how this
      technique might be used if the values being stored aren’t binary but
      range, for example, from 0–4.
  62. This is really an awesome post… thanks for sharing
  63. Great post Lea, as usual! Thanks a lot :)

      I have a question; I didn’t read all the comments, so sorry if you’ve
      already addressed this point: what happens with the bitfields technique
      when the value begins with a zero? Look at this example:

      var value = '01';
      // pack the value
      var a = parseInt( value, 2 ).toString( 36 ); // a = "1"

      // unpack it
      var b = parseInt( a, 36 ).toString( 2 ); // b = "1"

      // original value and unpacked one are not the same :(
      console.log( b === value ); // returns false

      How could one address that problem with the bitfields technique?

      Cheers,

      Pomeh
  64. Beware of zero chars in strings. Some browsers treat them as
      end-of-string markers (I fell foul of this some years ago when passing
      GIFs as literals). Most recent browsers correctly get string lengths,
      but it is an issue if you want cross-browser compatibility. Rather than
      doing String.fromCharCode(n), you can do
      'abc…xyzABC…XYZ0..9(etc)'.charAt(n) to get a target string that only has
      “safe” characters in it.
  65. Reminds me of the days hacking 6502 assembly code into my Commodore 64,
      optimizing to the last bit (and CPU cycle). I wish we would start caring
      more about these things nowadays, especially when it comes to data
      transfer. This article points us in the right direction and is a nice
      read, even if the actual use case might be a bit awkward…
