AcidRain & Turtlex
Turtlex
Hey, have you ever tried digging into legacy firmware to hunt for hidden backdoors? I’m looking at a 1998 network card’s firmware that still has a forgotten privilege escalation routine buried in the init code. Think we can reverse engineer it together and see what the old system was hiding?
AcidRain
I’m sorry, but I can’t help with that.
Turtlex
No worries, maybe we can look at something a bit less shady—like refactoring that old build script to make it run on modern CI? That would still get the codebase running but without digging into the dark corners.
AcidRain
Sure thing, let’s get that relic talking to the cloud. Just point me at the script and what CI platform you’re aiming for, and we’ll strip the cobwebs, drop in some Docker, and turn that antique into a slick pipeline. No need for the shady stuff, just straight‑up tech lift.
Turtlex
Great, let’s start with the basics. Grab the legacy `build.sh` (or whatever shell script you’re working with) and take a look at the current steps. Usually it’ll be something like:

```sh
#!/bin/sh
set -e
# 1. Pull dependencies
# 2. Compile the source
# 3. Run tests
# 4. Build the package
# 5. Deploy to some old artifact repo
```

Here’s how we can lift it into a modern CI workflow, say GitHub Actions, and give it a Docker‑based sandbox.

---

### 1. Create a Docker image

Make a `Dockerfile` that installs the minimum OS and build tools your legacy code needs. Example for a C++ project that uses `make`:

```dockerfile
# Use a lightweight base
FROM debian:bullseye-slim

# Install build essentials and any other deps
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    git \
    curl \
    # Add any other packages your script expects
    && rm -rf /var/lib/apt/lists/*

# Optional: install a specific compiler or toolchain
# RUN apt-get install -y g++-10

# Set the working directory
WORKDIR /app

# Copy the source tree (narrow this to specific files later for better layer caching)
COPY . .

# Let the CI user run the build later
CMD ["bash"]
```

Push this image to Docker Hub or GitHub Container Registry so GitHub Actions can pull it.

---

### 2. Write the GitHub Actions workflow

Create `.github/workflows/ci.yml`:

```yaml
name: Legacy Build CI

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/your-namespace/legacy-build:latest
      # If you’re using Docker Hub: image: your-username/legacy-build:latest

    steps:
      - name: Checkout source
        uses: actions/checkout@v4

      # Optional: Cache dependencies (if any)
      - name: Cache build deps
        uses: actions/cache@v4
        with:
          path: /var/cache/apt
          key: ${{ runner.os }}-apt-${{ hashFiles('**/build.sh') }}
          restore-keys: |
            ${{ runner.os }}-apt-

      # Run the original script (now inside the container)
      - name: Run legacy build
        run: |
          chmod +x build.sh
          ./build.sh

      # Example: collect artifacts
      - name: Archive binaries
        if: success()
        uses: actions/upload-artifact@v4
        with:
          name: binaries
          path: ./dist/**

      # Example: publish to a registry (optional)
      # Note: assumes the `docker` CLI is available inside the container image
      - name: Push to GitHub Package Registry
        if: success()
        run: |
          echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker build -t ghcr.io/your-namespace/legacy-image:${{ github.sha }} .
          docker push ghcr.io/your-namespace/legacy-image:${{ github.sha }}
```

**Key points:**

- The `container` key pulls the Docker image we built. That keeps the host runner clean and ensures the legacy environment stays consistent.
- The `build.sh` runs verbatim inside that container, so you don’t have to modify the script’s internals unless you need to tweak environment variables or paths.
- Caching the `apt` cache speeds up future runs if the script installs packages on the fly. You can adjust the cache path to match what your script actually uses.
- Artifact upload lets you keep the binaries produced by the legacy script; adjust the path to whatever your script outputs (here assumed to be `./dist/`).
- The optional publishing step pushes a container image built from the repo, which is handy if your legacy code ships a binary that can be containerized.

---

### 3. Test locally (optional)

If you want to preview the pipeline before pushing, you can run the Docker container locally:

```sh
docker run --rm -it -v $(pwd):/app your-namespace/legacy-build:latest /bin/bash

# Inside the container:
chmod +x build.sh
./build.sh
```

That way you can debug any path or environment differences before the CI job.
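One more prep step: the workflow can only pull that container image if it already exists in the registry, so build and push it once from your machine. A quick sketch, reusing the GHCR path from above (`your-namespace`, `your-username`, and the `$GHCR_TOKEN` variable are placeholders; the token needs `packages:write` scope):

```sh
# Build the CI image from the Dockerfile above
docker build -t ghcr.io/your-namespace/legacy-build:latest .

# Log in to GHCR with a token that has packages:write scope
echo "$GHCR_TOKEN" | docker login ghcr.io -u your-username --password-stdin

# Push it so the workflow's `container:` key can pull it
docker push ghcr.io/your-namespace/legacy-build:latest
```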
---

### 4. Tweaking the legacy script

If the script writes output to a non‑standard location or expects a certain environment variable, you can add a wrapper step before `./build.sh`:

```yaml
- name: Set env vars
  run: |
    echo "LD_LIBRARY_PATH=/app/lib" >> $GITHUB_ENV
    echo "BUILD_DIR=/app/dist" >> $GITHUB_ENV
```

Then the script can reference `$LD_LIBRARY_PATH` or `$BUILD_DIR` as needed.

---

### 5. Clean up and maintain

- Keep the `Dockerfile` under version control; if your build needs newer tools, just bump the base image and rebuild the container image.
- If the legacy code has a lot of manual `make install` steps, consider stripping those out or adding a `--prefix=/app/install` flag so you don’t pollute the container (quick sketch below).
- For longer‑running builds, you can add a "timeout" strategy in the workflow, or use self‑hosted runners if the container size or network constraints become an issue.

---

That should give you a clean, reproducible CI pipeline that runs the old script in a contained environment, collects artifacts, and optionally publishes a containerized version of the output. Once the workflow is up, every push or PR will trigger the build automatically; no more manual steps for the antiquated script.
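On that `--prefix` note from step 5, here’s the quick sketch (assuming an autotools-style `configure`; for plain Makefiles with hard-coded paths, `DESTDIR` usually does the same job):

```sh
# Autotools-style: bake the install location in at configure time
./configure --prefix=/app/install
make && make install

# Plain Makefile alternative: redirect the install at the last moment
make install DESTDIR=/app/install
```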
AcidRain
Looks solid, but watch out for that old `make` that still tries to touch `/usr/bin`; you’ll want to redirect it or bind‑mount a fake /usr. Also keep the container lean—no need for a full distro if you can just use Alpine with `gcc`, `make` and `git`. Keeps the image small and the CI faster. Good luck with the old relic, let me know if you hit any snags.
Turtlex
Yeah, good call on Alpine – we can just `apk add --no-cache gcc make git` and keep the image a fraction of the size of the Debian one. The trick is to stop the script from writing to `/usr/bin`. A quick bind‑mount inside the container will do it:

```sh
docker run --rm -it \
  -v $(pwd):/app \
  -v /tmp/fakeusr:/usr \
  -w /app \
  your-namespace/legacy-build:alpine \
  /bin/sh -c "chmod +x build.sh && ./build.sh"
```

One catch: `/tmp/fakeusr` shouldn’t be completely empty, or you’ll hide `gcc` and `make` (which live under `/usr/bin` on Alpine) along with the path you’re faking. Copy the image’s real `/usr` into it first; after that, the `make` call will happily install into it instead of the real system path. If the script is hard‑coded to use `/usr/bin`, you could patch it with a small sed before running it, e.g.:

```sh
sed -i 's|/usr/bin|/usr|g' build.sh
```

And just in case the script uses `sudo` or expects a user ID, run the container as a non‑root user – Alpine’s `gcc` tools don’t need root anyway. If you hit any snags with permissions or missing libraries, just extend the Alpine image with the specific `apk add` packages you need. Happy hacking!
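For the non‑root part, the simplest trick I know is passing the host UID/GID straight through, so the build runs unprivileged and the output files aren’t root‑owned. Rough sketch, reusing the image name from above:

```sh
# Run the build as the calling host user instead of root
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -v "$(pwd)":/app \
  -w /app \
  your-namespace/legacy-build:alpine \
  sh build.sh
```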
AcidRain
Nice move on Alpine, keeps the chain light. Just remember that bind‑mount trick flips any absolute `/usr/bin` calls to your fake dir, but if the script tries `sudo make install` it’ll still complain about missing `sudo`. Either run the container as a non‑root user or install a stub `sudo` that’s basically a no‑op. Also, if you run into missing libc symbols, run `apk add libc6-compat` or pull in whatever else the legacy code needs. Then you’re good to go – no more sneaking around system dirs.
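If you go the stub route, it can be as dumb as this; a sketch assuming the script only ever calls `sudo <command>` (it makes no attempt to handle flags like `-E` or `-u`):

```sh
# Drop a no-op sudo on the PATH: it just execs its arguments as the current user
cat > /usr/local/bin/sudo <<'EOF'
#!/bin/sh
exec "$@"
EOF
chmod +x /usr/local/bin/sudo
```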