Day 18: RAM Run
Megathread guidelines
- Keep top-level comments as solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
- You can share code in code blocks by putting three backticks before and after it, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL
FAQ
- What is this?: Here is a post with more details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
Part 2 can be faster if you iteratively remove blocks until there is a path. This works because failing to find a path is quick, and the flood fill algorithm does not need to fill as many spots when the map is filled up with more blocks! This drops the Part 2 solve to a few milliseconds. Others have taken a binary search approach, which is faster still.
Thanks, that’s exactly the sort of insight that I was too tired to have at that point 😅
The other thing I had to change was to make it recursive rather than iterating over the full grid - the latter is fast for large updates, but very wasteful for local updates like removing the points. Virtually instant now!
Code
#include "common.h"	/* AoC helpers: MIN, DISCARD, plus libc headers */

#define SAMPLE 0
#define PTZ    3600			/* max number of input points */
#define GZ     (SAMPLE ? 9 : 73)	/* grid size, incl. 1-cell border */
#define P1STEP (SAMPLE ? 12 : 1024)
#define CORR   -1			/* marker for corrupted cells */

/* 0 = unvisited, >0 = shortest distance from the start, plus one */
static int g[GZ][GZ];

static void
flood(int x, int y)
{
	int lo = INT_MAX;

	if (x <= 0 || x >= GZ-1 ||
	    y <= 0 || y >= GZ-1 || g[y][x] == CORR)
		return;

	/* best value reachable from a visited neighbor */
	if (g[y-1][x] > 0) lo = MIN(lo, g[y-1][x] +1);
	if (g[y+1][x] > 0) lo = MIN(lo, g[y+1][x] +1);
	if (g[y][x-1] > 0) lo = MIN(lo, g[y][x-1] +1);
	if (g[y][x+1] > 0) lo = MIN(lo, g[y][x+1] +1);

	/* improvement? take it and propagate to the neighbors */
	if (lo != INT_MAX && (!g[y][x] || g[y][x] > lo)) {
		g[y][x] = lo;
		flood(x, y-1);
		flood(x, y+1);
		flood(x-1, y);
		flood(x+1, y);
	}
}

int
main(int argc, char **argv)
{
	static int xs[PTZ], ys[PTZ];
	static char p2[32];
	int p1=0, npt=0, i;

	if (argc > 1)
		DISCARD(freopen(argv[1], "r", stdin));

	/* surround the playing field with a corrupted border; the real
	 * cells are offset by one in both directions */
	for (i=0; i<GZ; i++)
		g[0][i] = g[GZ-1][i] =
		g[i][0] = g[i][GZ-1] = CORR;

	for (npt=0; npt<PTZ && scanf(" %d,%d", xs+npt, ys+npt)==2; npt++) {
		assert(xs[npt] >= 0); assert(xs[npt] < GZ-2);
		assert(ys[npt] >= 0); assert(ys[npt] < GZ-2);
	}
	assert(npt < PTZ);

	/* drop all blocks, then fill distances from the start cell */
	for (i=0; i<npt; i++)
		g[ys[i]+1][xs[i]+1] = CORR;

	g[1][1] = 1;
	flood(2, 1);
	flood(1, 2);

	/* remove blocks in reverse order; the removal that first makes
	 * the exit reachable is the part 2 answer */
	for (i=npt-1; i >= P1STEP; i--) {
		g[ys[i]+1][xs[i]+1] = 0;
		flood(xs[i]+1, ys[i]+1);
		if (!p2[0] && g[GZ-2][GZ-2] > 0)
			snprintf(p2, sizeof(p2), "%d,%d", xs[i],ys[i]);
	}

	/* with only the first P1STEP blocks left, this is the part 1 path */
	p1 = g[GZ-2][GZ-2]-1;
	printf("18: %d %s\n", p1, p2);
	return 0;
}
Wooo! Instant is so good, I knew you could do it! When I see my Python script getting close to 20 ms, I usually expect my fellow optimized-language peers to be doing it faster. Pretty surprised to see so many varying solutions that ended up being a little slower just because people didn’t realize the potential speedup from failing to find a path.
The first part has a guaranteed path! If you think about a binary search: when there is a path, the blocking byte is higher up the list, so we can ignore the lower blocks in the list, move to the next “midpoint” to test, and just fill and remove blocks as we go to each midpoint. So I took the first part as the lower bound and moved to a midpoint above that.
At least that is how I saw it when I first looked, but a binary search is a little harder to think of than a simple for loop from the end of the list back. Yet I still got it done! I even included a dead-end filler that takes 7 ms to show the final path for Part 2 - it was not needed, but it was a neat inclusion!
Awesome! I understood the idea behind the binary search but thought it wasn’t a good fit for the flood fill. As opposed to something like A*, it gives you reachability and cost for every cell (at a cost), but that’s no use when you do repeated searches that are only meant to find a single path. So I was very happy with your suggestion; it plays to the flood fill’s strengths.
“Virtually instant”, btw, is measured as 0.00 by time. I like it when things are fast, but I also prefer simpler approaches (that is: loops and arrays) over the really optimized fast stuff. People do really amazing things, but the really clever algorithms lean on optimized generic data structures that C lacks. It’s fun though to see how far you can push loops and arrays! Perhaps next year I’ll pick a compiled language with a rich data structure library and really focus on effectively applying good algorithms and appropriate data structures.
Btw, how do you measure performance? I see a lot of people including timing in their programs, but I can’t be bothered. Some people also exclude parsing, which wouldn’t work for me because I try to process the input immediately where possible.