Day 9: Disk Fragmenter

Megathread guidelines

  • Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post (replies to comments can be whatever).
  • You can share code in code blocks by typing three backticks, then the code, then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sharing it through a URL.

  • Acters@lemmy.world · 13 days ago

    PYTHON

    Execution Time: Part 1 = 0.02 seconds. Part 2 = ~2.1 seconds. Total = ~2.1 seconds.

    Aiming for simplicity over speed. This is pretty fast considering it doesn't employ tricks like trees and all that.

    code

    Because of the text limit and this code being slow, I put it in a Topaz paste: [ link ]

    Edit:

    New version that uses a dictionary to keep track of the next empty slot that fits the current index.

    Execution Time: Part 1 = 0.02 seconds. Part 2 = ~0.08 seconds. Total = ~0.08 seconds (80 ms).

    code

    You can also find this code in the Topaz link: [ link ]

    Edit: final revision. I just realized that calculating last_consecutive_full_partition was unnecessary and very slow. If I know all the next available slots and can end early once my current index dips below all of them, then last_consecutive_full_partition is never reached. This drops the time to under ~0.1 seconds.
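
    The early-exit check might look something like this (next_slot_by_len and file_index are hypothetical names; the full code is in the Topaz paste):

    def can_stop_early(next_slot_by_len, file_index):
        # next_slot_by_len: run length -> index of the leftmost known empty
        # slot of that length (or None once exhausted). Once every known
        # empty slot sits at or beyond the file being considered, nothing
        # further can move left, so the back-to-front pass can end.
        return all(slot is None or slot >= file_index
                   for slot in next_slot_by_len.values())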

    Probably final edit: I found someone’s O(n) code for OCaml and tried to convert it faithfully to pure Python. It seems to work really, really fast: 30-50 ms for most inputs, and it seems to scale linearly too.

    FastCode
    def int_of_char(x):
        # convert an ASCII digit to its integer value
        return ord(x) - ord('0')
    
    # Represent content as tuples:
    # ('Empty', size) or ('File', id, size)
    def parse(line):
        arr = []
        for i in range(len(line)):
            c = int_of_char(line[i])
            if i % 2 == 0:
                arr.append(('File', i // 2, c))
            else:
                arr.append(('Empty', c))
        return arr
    
    def int_sum(low, high):
        # sum of the integers low..high (arithmetic series)
        return (high - low + 1) * (high + low) // 2
    
    def size(elem):
        t = elem[0]
        if t == 'Empty':
            return elem[1]
        else:
            return elem[2]
    
    def part1(array):
        # Two pointers: emit files from the left and fill each gap with blocks
        # taken from the rightmost remaining file, checksumming as we go.
        total = 0
        left = 0
        pos = 0
        right = len(array) - 1
    
        while left < right:
            if array[left][0] == 'File':
                # File
                _, fid, fsize = array[left]
                total += fid * int_sum(pos, pos + fsize - 1)
                pos += fsize
                left += 1
            else:
                # Empty
                _, esize = array[left]
                if array[right][0] == 'Empty':
                    right -= 1
                else:
                    # Right is File
                    _, fid, fsize = array[right]
                    if esize >= fsize:
                        array[left] = ('Empty', esize - fsize)
                        total += fid * int_sum(pos, pos + fsize - 1)
                        pos += fsize
                        right -= 1
                    else:
                        array[right] = ('File', fid, fsize - esize)
                        total += fid * int_sum(pos, pos + esize - 1)
                        pos += esize
                        left += 1
    
        # If one element remains (left == right)
        if left == right and left < len(array):
            if array[left][0] == 'File':
                _, fid, fsize = array[left]
                total += fid * int_sum(pos, pos + fsize - 1)
    
        return total
    
    def positions(arr):
        # starting block offset of each entry in the dense layout
        total = 0
        res = []
        for e in arr:
            res.append(total)
            total += size(e)
        return res
    
    def array_fold_right_i(f, arr, acc):
        # right-to-left fold that also passes each element's index
        pos = len(arr) - 1
        for elt in reversed(arr):
            acc = f(elt, pos, acc)
            pos -= 1
        return acc
    
    def part2(array):
        def find_empty(size_needed, max_pos, pos):
            # Scan the Empty runs (odd indices) left to right for the first one
            # big enough; shrink it in place and return its index.
            while pos <= max_pos:
                if array[pos][0] == 'File':
                    raise Exception("Unexpected: odd indices should hold Empty runs")
                # Empty
                _, esize = array[pos]
                if esize >= size_needed:
                    array[pos] = ('Empty', esize - size_needed)
                    return pos
                pos += 2
            return None
    
        # next search position for each run length 0-9; Empty runs live at odd indices
        emptys = [1] * 10
        pos_arr = positions(array)
    
        def fold_fun(elt, i, total):
            # Files are processed right-to-left: each one either moves into the
            # leftmost fitting gap or stays put, and its checksum is added on the spot.
            if elt[0] == 'Empty':
                return total
            # File
            _, fid, fsize = elt
            init_pos = emptys[fsize]
            if init_pos is None:
                new_pos = pos_arr[i]
            else:
                opt = find_empty(fsize, i, init_pos)
                if opt is None:
                    new_pos = pos_arr[i]
                else:
                    new_pos = pos_arr[opt]
                    pos_arr[opt] += fsize
                    emptys[fsize] = opt
            return total + fid * int_sum(new_pos, new_pos + fsize - 1)
    
        return array_fold_right_i(fold_fun, array, 0)
    
    def main():
        with open('largest_test', 'r') as f:
            line = f.read().replace('\r', '').replace('\n', '')
        arr = parse(line)
        arr_copy = arr[:]
        p1 = part1(arr_copy)
        print("Part 1 :", p1)
        p2 = part2(arr)
        print("Part 2 :", p2)
    
    if __name__ == "__main__":
        main()
    
    
    • VegOwOtenks@lemmy.world · 13 days ago

      So cool! I was very hyped when I managed to squeeze out the last bit of performance; hope you are too. I'm especially surprised you managed it with Python, even without the simple tricks like trees ;)

      I wanted to try it myself and can confirm it runs in under 0.1 s in performance mode on my laptop. I am amazed, though I must admit I don’t understand your newest revision. 🙈

      • Acters@lemmy.world · 13 days ago

        Just to let you know, I posted the fastest Python version I could come up with, which took heavy inspiration from [ link to github ].

        It's supposedly O(n) linear time, and it does seem to run really fast.

      • Acters@lemmy.world · 13 days ago

        Thanks! Your Haskell solution is extremely fast, and I don’t understand your solution either. 🙈 lol

        My latest revision keeps a dict of lists of known empty slots, keyed by slot length (including partially filled slots). I iteratively find the slot with the lowest index and keep each list ordered from lowest to highest index.

        Looking at the challenge example/description, it shows a first-pass-only kind of “fragmenting”: we can be confident that if something did not fit, it just stays in the same spot, even if another slot later frees up enough space for it. So checking whether the current index is lower than the lowest index of any of the slot lengths is enough to stop early. That is why I got rid of last_consecutive_full_partition; it was slowing things down by up to 2 seconds.

        In the example, even if 5555, 6666, or 8888 could fit in the new spot created by moving 44, they stay put. Thus it is a first-pass-only sort from back to front (a sketch of this bookkeeping follows the example):

        00...111...2...333.44.5555.6666.777.888899
        0099.111...2...333.44.5555.6666.777.8888..
        0099.1117772...333.44.5555.6666.....8888..
        0099.111777244.333....5555.6666.....8888..
        00992111777.44.333....5555.6666.....8888..
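
        A minimal sketch of this kind of bookkeeping, using one min-heap of slot positions per run length instead of ordered lists (an illustration of the idea, not the code from the paste):

        import heapq

        def part2_checksum(dense):
            # parse the dense map: even digits are files, odd digits are gaps
            files = []                            # (start_pos, size, file_id)
            gaps = {s: [] for s in range(1, 10)}  # run length -> min-heap of start positions
            pos = 0
            for i, ch in enumerate(dense):
                size = int(ch)
                if size:
                    if i % 2 == 0:
                        files.append((pos, size, i // 2))
                    else:
                        heapq.heappush(gaps[size], pos)
                pos += size

            total = 0
            # one back-to-front pass: each file moves at most once, never rightwards
            for pos, size, fid in reversed(files):
                # leftmost known gap among all lengths that can hold this file
                best = min(((gaps[s][0], s) for s in range(size, 10)
                            if gaps[s] and gaps[s][0] < pos), default=None)
                if best is not None:
                    pos, s = best
                    heapq.heappop(gaps[s])
                    if s > size:  # leftover space becomes a smaller gap
                        heapq.heappush(gaps[s - size], pos + size)
                total += fid * size * (2 * pos + size - 1) // 2  # arithmetic series
            return total

        print(part2_checksum("2333133121414131402"))  # the example above -> 2858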
        
        • VegOwOtenks@lemmy.world · 13 days ago

          Thank you for the detailed explanation! It made me realize that our solutions are very similar. Instead of keeping a Dict[Int, List[Int]] where the value list is ordered, I have a Dict[Int, Tree[Int]], which allows easy (and fast!) lookups due to the nature of trees. (Also, lists in Haskell are horrible to mutate.)

          I also apply your technique of only processing each file once: instead of calculating the checksum afterwards on the entire list of file blocks, I calculate it on the fly whenever I process a file. Using some maths I managed to reduce the sum to a constant expression.
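
          The constant expression is presumably the arithmetic series over a file's block positions (the same trick as int_sum in the Python version above): a file fid occupying positions pos .. pos + size - 1 contributes

          def file_checksum(fid, pos, size):
              # sum of fid * p for p in pos .. pos + size - 1,
              # collapsed into one arithmetic-series expression
              return fid * size * (2 * pos + size - 1) // 2

          # e.g. a file with id 9 on positions 2 and 3 contributes 9 * (2 + 3) = 45
          assert file_checksum(9, 2, 2) == 45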

          • Acters@lemmy.world · 13 days ago

            Yeah, I was a bit exhausted from thinking at a high level of abstraction. I do think that doing the checksum at the same time could shave off a few more milliseconds, though it is close to the limit of what's possible, especially for Python with its limited data types (no trees, lol). Decently fast enough for me :)

            Edit: I also just tested splitting into two lists, and it gave no real speedup; it actually made things slower. Iterating backwards really is fast with that list slice. I can’t think of another way to speed it up beyond what it does right now.

              • Acters@lemmy.world · 13 days ago

                So, if I look at each part of my code: the first 4 lines take 20 ms.

                input_data = input_data.replace('\r', '').replace('\n', '')  # strip newlines from the dense map
                # expand each digit: even indices become runs of file-id blocks, odd indices runs of '.' gap blocks
                part2_data = [[i//2 for _ in range(int(x))] if i%2==0 else ['.' for _ in range(int(x))] for i,x in enumerate(input_data)]
                part2_data = [x for x in part2_data if x != []]  # drop zero-length runs
                part1_data = [y for x in part2_data for y in x]  # flatten into individual blocks for part 1
                

                The part1 for loop will take 10 ms.

                The for loop to set up next_empty_slot_by_length will take another 10 ms.

                The part2 for loop will take 10 ms, too!

                and adding up the part2 checksums will add another 10 ms.

                So, in total, it runs in ~60 ms, but Python startup overhead seems to add another 20-40 ms depending on whether you are on Linux (20 ms) or Windows (40 ms); both measured on the host, not in a VM. Linux usually has a faster startup time.

                I am not sure where I would find another speedup. The startup overhead makes this just slower than the other top-performing solutions, which are also hitting a limit of 40-60 ms.
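
                For what it's worth, one way to separate the ~60 ms of actual work from interpreter startup is to time inside the script (solve() is a stand-in for the real logic):

                import time

                def solve(dense):
                    # stand-in for the real part 1 / part 2 logic
                    return sum(int(c) for c in dense), 0

                start = time.perf_counter()
                p1, p2 = solve("2333133121414131402")
                print(f"solve took {(time.perf_counter() - start) * 1000:.2f} ms")
                # unlike `time python script.py`, this excludes the 20-40 ms interpreter startup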

              • Acters@lemmy.world · 13 days ago

                Ah well, I tried switching to Python’s set(), but it was slow because it is unordered: I would need min() to find the lowest index, which was painfully slow. Indexing might be fast, but pop(0) on a list (O(n) in theory) is just as fast on these short lists in practice (switching to deque gave no speedup either). The list operations I am using are mostly O(1) time.

                If I comment out this part, which does the adding:

                # adds up the part 2 checksum over the flattened block list
                part2_data = [y for x in part2_data for y in x]
                part2 = 0
                for i, x in enumerate(part2_data):
                    if x != '.':
                        part2 += i * x
                

                so that it isolates the checksum part, it still takes only 80-100 ms. So the checksum adds no noticeable slowdown; even if I did the checksum at the same time as the sorting, it would not lower the execution time.

        • VegOwOtenks@lemmy.world · 13 days ago

          I only found your edit after I had finished my previous comment. I think splitting into two lists may be good: one list of files and one of empty blocks. Then again, this may not work with your checksumming, so maybe not.
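
          A minimal sketch of that split over the dense map, independent of any particular checksumming (split_runs is a hypothetical helper):

          def split_runs(dense):
              # one walk over the dense disk map, collecting (start_pos, length)
              # runs: even indices describe files, odd indices empty blocks
              files, empties = [], []
              pos = 0
              for i, ch in enumerate(dense):
                  size = int(ch)
                  if size:
                      (files if i % 2 == 0 else empties).append((pos, size))
                  pos += size
              return files, empties

          files, empties = split_runs("2333133121414131402")
          print(files[:3])    # [(0, 2), (5, 3), (11, 1)]
          print(empties[:3])  # [(2, 3), (8, 3), (12, 3)]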