Managing Multiple Repositories for a Single Project: A Practical Guide for Solo Developers

When your project grows beyond a simple side experiment into a real product — with a separate frontend, backend, and AI service — you quickly face a structural question: should everything live in one repository, or should each service have its own?

This guide is what I wish I had when I made that decision. It covers the why behind splitting repos, the different deployment strategies available, daily Git workflows, and the common mistakes that will cost you hours if you don’t know about them in advance.


Why Split Into Multiple Repositories?

The honest answer is: you don’t always have to. A monorepo works fine for many projects. But when your stack involves genuinely separate services — say, a Next.js frontend, a Go or Node.js backend, and a Python AI service — the separation starts to earn its keep.

Here’s how the two approaches compare:

| Concern | Monorepo | Multiple Repos |
|---|---|---|
| Deploy frontend independently | No | Yes |
| Deploy backend independently | No | Yes |
| Different tech stacks | Awkward | Clean |
| CI/CD per service | One pipeline | Per service |
| Team separation (future) | Hard | Easy |
| Repo size | Gets large | Stays lean |

The key insight is independent deployability. When your frontend and backend have separate repos, you can push a hotfix to one without touching the other. You can give a contractor access to only the frontend. You can set up separate CI/CD pipelines with different resource requirements.

A typical multi-service architecture for a SaaS product might look like this:

github.com/yourname/myproject          ← root / deployment repo
github.com/yourname/myproject-nextjs   ← frontend (Next.js)
github.com/yourname/myproject-backend  ← backend (Go, Node, etc.)
github.com/yourname/myproject-ai       ← AI service (FastAPI, Python)

The Role of Each Repository

Root / Deployment Repo

This repo orchestrates everything. It contains no business logic — only the infrastructure that holds the services together.

myproject/
├── myproject-nextjs/       ← frontend (separate repo, cloned here)
├── myproject-backend/      ← backend (separate repo, cloned here)
├── shared-assets/          ← JSON config, images (tracked here)
│   ├── content/
│   └── images/
├── docker-compose.yml      ← runs everything locally
├── docker-compose.prod.yml ← production overrides
├── nginx/                  ← reverse proxy config
├── docs/                   ← all project documentation
└── .env.example            ← variable template (never real secrets)

The golden rule: only deployment config and shared static assets live here. Never application code.
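As a sketch, a docker-compose.yml that wires these pieces together might look like this (service names, ports, and volume paths are assumptions — adjust to your stack):

```yaml
services:
  frontend:
    build: ./myproject-nextjs      # built from the cloned frontend repo
    ports:
      - "3000:3000"

  backend:
    build: ./myproject-backend     # built from the cloned backend repo
    env_file: ./myproject-backend/.env
    ports:
      - "8080:8080"

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro              # reverse proxy config
      - ./shared-assets:/var/www/assets:ro        # shared static assets
    ports:
      - "80:80"
```

Note that the root repo only references the service directories; it never contains their code.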

Frontend Repo

All your UI code — components, pages, API route wrappers, styles, and tests. Deploys to its own Docker container. Can be updated without touching the backend.

Backend Repo

All your server-side logic — API handlers, database models, migrations, and business rules. Completely independent from the frontend’s deploy cycle.

Shared Assets (inside root repo)

Things that both services need access to but aren’t code: content JSON files read by the backend, images served by Nginx, configuration references. Audio and video files should not go here — use rsync or a CDN instead (more on this below).


Three Deployment Approaches

Option A — Simple Clone (Best for Solo Developers)

The simplest possible approach: on your production server, clone each repo into the expected folder structure.

Initial server setup (one-time):

cd /opt
git clone https://github.com/yourname/myproject.git
cd myproject
git clone https://github.com/yourname/myproject-nextjs.git myproject-nextjs
git clone https://github.com/yourname/myproject-backend.git myproject-backend

Deploying an update:

cd /opt/myproject/myproject-nextjs && git pull origin main
cd /opt/myproject/myproject-backend && git pull origin main
cd /opt/myproject && git pull origin main
docker compose up -d --build

Pros: Immediately understandable, no extra Git concepts to learn, easy to debug.
Cons: Manual coordination across multiple pulls, no version pinning.
Best for: Solo developers, early-stage launches, small teams.
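The pull-and-rebuild ritual above is easy to script. A minimal sketch of what a `scripts/deploy.sh` helper might contain (the repo names and the `/opt/myproject` default are assumptions from the example layout):

```shell
#!/bin/sh
# Hypothetical Option A deploy helper: pull every service repo, then the
# root repo, then rebuild. Repo names and default path are assumptions.
set -e

deploy() {
  root="${1:-/opt/myproject}"
  for repo in myproject-nextjs myproject-backend; do
    echo "Updating $repo..."
    git -C "$root/$repo" pull origin main
  done
  echo "Updating root repo..."
  git -C "$root" pull origin main
  ( cd "$root" && docker compose up -d --build )
}

# Run only when executed directly, e.g.: ./scripts/deploy.sh /opt/myproject
if [ "${0##*/}" = "deploy.sh" ]; then deploy "$@"; fi
```

Keeping this in the root repo means the deploy procedure itself is versioned along with the orchestration config.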


Option B — Git Submodules (For Teams Who Need Version Pinning)

Git submodules let the root repo officially track which exact commit of the frontend and backend is in production. When you update the submodule pointer, you’re recording “these three service versions were deployed together.”

.gitmodules file:

[submodule "myproject-nextjs"]
    path = myproject-nextjs
    url = https://github.com/yourname/myproject-nextjs.git

[submodule "myproject-backend"]
    path = myproject-backend
    url = https://github.com/yourname/myproject-backend.git

Initial clone:

git clone --recurse-submodules https://github.com/yourname/myproject.git

Deploying:

git pull origin main
git submodule update --init --recursive
docker compose up -d --build

The gotcha you will hit:

Submodules are in detached HEAD state by default. If you clone, go into the submodule folder, and start editing — you’re not on any branch. Your commits are technically floating and can be lost. Always run git checkout main (or your working branch) inside each submodule before making changes.

Also: always push the submodule repo before pushing the root repo. If the root repo points to a commit that hasn’t been pushed to the submodule remote, anyone who clones your root repo will get an error.

Pros: Atomic, version-pinned deployments. Easy rollback to a known-good combo of service versions.
Cons: Extra complexity, easy to make mistakes, confusing for developers new to submodules.
Best for: Teams of 2+, projects where you need a clear record of what was deployed when.


Option C — Docker Registry (Industry Standard)

The most mature approach: your CI/CD pipeline builds a Docker image on every push to main and pushes it to a registry. The server never builds anything — it only pulls and runs pre-built images.

Developer pushes to main
        ↓
GitHub Actions builds Docker image
        ↓
Image pushed to GitHub Container Registry (ghcr.io)
        ↓
Server: docker compose pull && docker compose up -d
        ↓
New version live in ~30 seconds

docker-compose.yml on the server:

services:
  frontend:
    image: ghcr.io/yourname/myproject-nextjs:latest

  backend:
    image: ghcr.io/yourname/myproject-backend:latest

GitHub Actions workflow (.github/workflows/build.yml in each service repo):

name: Build and Push
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/yourname/myproject-nextjs:latest

Pros: Fastest deploys, identical images across environments, no build step on production server.
Cons: Requires CI/CD setup time upfront, image storage to manage.
Best for: Mature projects, teams deploying frequently, Phase 2+ with additional services.


Branching Strategy That Works Across Multiple Repos

Apply the same branching model to all repos consistently:

main          ← production only (what runs on the server)
  └── dev     ← active development, daily integration
       ├── feature/user-auth
       ├── feature/ai-scoring
       └── fix/login-redirect-bug

| Branch | Who pushes | Purpose |
|---|---|---|
| main | Deliberate merge only | Stable, production-ready |
| dev | Daily work | Integration branch |
| feature/* | Per-feature development | Isolated work |

Solo Developer Daily Workflow

# Pick ONE repo to work in — never context-switch mid-commit
cd myproject-nextjs

# Always start by pulling
git checkout dev
git pull origin dev

# Small change — commit directly to dev
git add .
git commit -m "Fix audio player volume not persisting"
git push origin dev

# Bigger feature — use a branch
git checkout -b feature/ai-writing-score
# work...
git checkout dev
git merge feature/ai-writing-score
git push origin dev
git branch -d feature/ai-writing-score

Deploying to Production

# 1. Merge each changed service's dev → main
cd myproject-backend
git checkout main && git merge dev && git push origin main

cd ../myproject-nextjs
git checkout main && git merge dev && git push origin main

cd ..
git checkout main && git merge dev && git push origin main

# 2. SSH to server and deploy
ssh root@YOUR_SERVER_IP
cd /opt/myproject
./scripts/deploy.sh

What Gets Committed Where

This is where most multi-repo confusion lives. Here’s a practical decision table:

| File | Repo | Reason |
|---|---|---|
| React components, pages, hooks | Frontend repo | Frontend code |
| TypeScript types and utilities | Frontend repo | Frontend code |
| API handlers, models, routes | Backend repo | Backend code |
| Database migrations (SQL) | Backend repo | Backend schema |
| Content JSON files | Root repo | Shared content |
| Images (.webp, .png) | Root repo | Shared assets |
| docker-compose.yml | Root repo | Orchestration |
| Nginx config | Root repo | Deployment config |
| .env.example | Root repo | Template only |
| .env (real secrets) | Nowhere | Never commit |
| Large binary/media files | Nowhere | Use rsync or CDN |
| node_modules/ | Nowhere | Install from package.json |
| Build artifacts | Nowhere | Generated at build time |

Handling Large Binary Files (Audio, Video, Large Images)

Git was not designed for large binary files. If you’re building something that involves audio or video content — a language learning app, a media platform, a podcast tool — do not store those files in git. Here’s why:

  • Binary files bloat git history permanently
  • Every regenerated file adds another full copy to history
  • GitHub will warn and eventually refuse large repos
  • Clone times become painfully slow

The solution: rsync to your server, CDN for production.

# From your local machine — only uploads new or changed files
rsync -avz --progress \
  /path/to/local/audio/ \
  root@YOUR_SERVER_IP:/opt/myproject/shared-assets/audio/

Add the exclusion to your .gitignore:

shared-assets/audio/
shared-assets/video/

When traffic grows, migrate to a CDN (Cloudflare R2 or AWS S3) and update your file references to point to CDN URLs. The rsync step goes away entirely.
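Until then, a minimal Nginx location block for serving the synced assets might look like this (the URL prefix and filesystem path are assumptions based on the layout above):

```nginx
location /assets/ {
    alias /opt/myproject/shared-assets/;   # files uploaded via rsync
    expires 7d;                            # cache static media aggressively
    add_header Cache-Control "public";
}
```

Serving media straight from Nginx keeps these files out of both git history and your application containers.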


Environment Variables Across Repos

Each service manages its own environment variables. The .env file is never in git — only .env.example with placeholder values.

Frontend (.env.local for development):

NEXT_PUBLIC_API_URL=http://localhost:8080

Backend (.env for development):

DATABASE_URL=postgres://user:password@localhost:5432/mydb
JWT_SECRET=your-secret-key
ENVIRONMENT=development

Production (created manually on the server, never committed):

DATABASE_URL=postgres://user:STRONG_PASSWORD@db:5432/mydb_prod
JWT_SECRET=LONG_RANDOM_PRODUCTION_KEY
ENVIRONMENT=production
FRONTEND_URL=https://yourdomain.com

The discipline here is simple: treat any .env file with a real value in it as if it were your bank password. It goes nowhere except the machine that needs it.
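On that note, a production secret should be generated rather than invented. One way to do it, assuming `openssl` is installed:

```shell
# Generate a 48-byte random key, base64-encoded (~64 characters)
JWT_SECRET=$(openssl rand -base64 48)
echo "$JWT_SECRET"
```

Paste the output into the server's .env by hand; never route it through git, email, or chat.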


Common Mistakes (and How to Avoid Them)

Committing to the wrong repo

This happens constantly when you have nested directories. You’re in the root repo, you open a frontend file in your editor, you run git add . from the root — and you’ve just committed frontend code to the root repo’s history.

# The danger: you're in the root, but editing a nested file
cd /opt/myproject
vim myproject-nextjs/src/components/Header.tsx
git add .  # ← commits to root repo, NOT frontend repo

# The fix: always cd into the correct repo first
cd myproject-nextjs
git add src/components/Header.tsx
git commit -m "Fix header layout"

Make it a habit: before any git add, run git status and confirm you’re in the repo you think you are.
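One way to make that habit cheap is a tiny helper in your shell profile that prints which repository the next `git add` would land in (a sketch — the function name is made up):

```shell
# Hypothetical helper: show the repo root that the next `git add` will hit.
# Add it to ~/.bashrc or ~/.zshrc.
whichrepo() {
  git rev-parse --show-toplevel 2>/dev/null || echo "not inside a git repo"
}
```

Run from /opt/myproject it prints the root repo path; run from /opt/myproject/myproject-nextjs it prints the nested frontend repo, so a wrong-repo commit becomes obvious before it happens.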

Pushing secrets

If you accidentally commit a .env file, the secret is compromised the moment you push — even if you delete it in the next commit, it stays in git history. Rotate the secret immediately, then clean history:

git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch .env" HEAD
git push --force

Better yet: install a pre-commit hook or use a tool like git-secrets to prevent it from happening in the first place. (For history rewriting, the Git project now recommends the separate git filter-repo tool over filter-branch, which is deprecated.)
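As a sketch of that idea, here is a minimal pre-commit hook that refuses to commit any staged file named .env (save it as `.git/hooks/pre-commit` and make it executable; the pattern is an assumption — extend it for variants like .env.production):

```shell
#!/bin/sh
# Minimal hook sketch: abort the commit if a file named .env is staged.
check_staged_env() {
  if git diff --cached --name-only | grep -qE '(^|/)\.env$'; then
    echo "ERROR: a .env file is staged -- refusing to commit secrets" >&2
    return 1
  fi
}
check_staged_env || exit 1
```

Because hooks live in .git/hooks (which is not committed), you must install this in each clone of each repo, or use a hook manager.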

Working directly on main

# Never this:
git checkout main
# ... make changes ...
git push origin main  # Untested code goes straight to production

# Always this:
git checkout dev
# ... make changes, test locally ...
git checkout main && git merge dev && git push origin main

Forgetting to pull before starting

In a multi-repo project, you might work on the frontend one day and the backend the next, from different machines. Always start with a pull:

git checkout dev
git pull origin dev
# Now start working

(Submodules only) Submodule in detached HEAD state

After cloning or updating submodules, always verify you’re on an actual branch:

cd myproject-nextjs
git status  # "HEAD detached at abc1234" — this is the problem
git checkout main  # Fix it

Choosing Your Approach: A Decision Guide

Solo or team of 1-2?
  └── Yes → Option A (simple clone) to start
       └── Adding a complex AI service in Phase 2?
            └── Yes → Move to Option C (Docker Registry)
            └── No  → Stay with Option A

Team of 3+?
  └── Yes → Option B (Git Submodules) or Option C (Docker Registry)
       └── Have dedicated DevOps?
            └── Yes → Option C — fastest deploys, cleanest workflow
            └── No  → Option B — version pinning without registry overhead

Quick Reference: Daily Commands

Starting work:

cd myproject-nextjs   # or myproject-backend
git checkout dev
git pull origin dev

Saving work:

git add .
git commit -m "Clear description of what changed and why"
git push origin dev

Deploying to production (manual, you decide when):

# On your machine: merge dev → main for each changed repo
git checkout main && git merge dev && git push origin main

# On the server (Hetzner, DigitalOcean, etc.)
ssh root@YOUR_SERVER_IP
cd /opt/myproject
git pull origin main
cd myproject-nextjs && git pull origin main && cd ..
cd myproject-backend && git pull origin main && cd ..
docker compose up -d --build

Checking production status:

docker compose ps
docker compose logs -f backend
curl https://api.yourdomain.com/health

Rolling back a broken deployment:

cd myproject-nextjs
git log --oneline          # Find the last good commit
git checkout abc1234       # Go back to it (detached HEAD — temporary)
docker compose up -d --build frontend
# Once the fix lands, return to a branch: git checkout main

Final Thoughts

Multi-repo architecture introduces real coordination overhead. That overhead pays for itself when services are genuinely independent — different tech stacks, different deploy cadences, or different teams working on each. If you’re early-stage and solo, start with Option A (simple clone) and keep it simple. Add complexity only when the pain of not having it becomes concrete.

The branching strategy, the .env discipline, the file placement rules, and the deployment scripts can all be defined once and then just followed. Getting those conventions right at the start saves a lot of painful rework later.

If this guide helped you, I write about full-stack development, cloud deployments, and building SaaS products as a solo developer at edupala.com.


Tags: git, devops, docker, multi-repo, deployment, hetzner, self-hosted, solo developer
