
Frigate NVR with GPU-offloading in Docker on LXC on Proxmox, integrated with Home Assistant and Authentik

I recently set up Frigate NVR at home, and wanted to integrate it into my existing ecosystems. I wanted to offload transcodes and detection to my GPU, use my existing Authentik setup for authentication, and integrate it with my Home Assistant setup. All of this runs within Docker containers inside LXC on my Proxmox server. This is how I configured everything.

There are a couple of steps that need to be done in the correct order. First we’ll have to configure and install GPU drivers on the host and inside the LXC container. Then we need to make sure we have the latest Docker binaries and NVIDIA runtimes. Once that is done, we can start on the initial Frigate setup, followed by the reverse-proxy and Authentik setup. Finally, we can look into integrating Home Assistant with Frigate.

Docker & GPU drivers

The first step is to make sure we have everything we need for the GPU offloading. This entails installing GPU drivers on the Proxmox host and inside the LXC container, and making sure we have the latest Docker binaries. I’ve already written a separate guide on how to do this for Plex, and the steps are the same ones we need for Frigate. Follow my other guide, and just skip the last step regarding Plex (unless you also want Plex in addition to Frigate).
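
Once the drivers and the NVIDIA container toolkit are in place, a quick way to verify that the GPU is reachable from Docker inside the LXC container is to run nvidia-smi in a throwaway container (the CUDA image tag here is just an example):

# should print the same GPU table as nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi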

When it comes to the Docker configuration, there are many different setups and flavors. I’m not going to cover all the details here, and for the most part they’re not relevant for the purpose of this guide. I’ll paste my configurations in their entirety for reference, but please don’t copy-paste them blindly without knowing what they all do. Also, some of my configuration might at this point be outdated or simply ignored by Docker; it was initially added to get proper IPv4 + IPv6 dual-stack support in Docker several years ago.

# /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "experimental": true,
  "ip6tables": true,
  "iptables": true,
  "ipv6": true,
  "fixed-cidr": "10.10.10.0/24",
  "fixed-cidr-v6": "2001:67c:197c:1000:10::/64",
  "default-address-pools": [
    {
      "base": "10.10.20.0/23",
      "size": 24
    }
  ]
}
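
After updating daemon.json and restarting Docker, a quick sanity check is to verify that the NVIDIA runtime is actually registered:

# "nvidia" should show up in the list of runtimes
docker info | grep -i runtimes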


# docker-compose.yml for DNS service, since the built-in Docker DNS at one point didn't
# fully support ipv6. we also create the main Docker network used by the other
# docker-compose configurations we have
services:
  dns:
    container_name: dns
    hostname: dns
    image: tianon/rawdns
    restart: unless-stopped
    command: rawdns /etc/rawdns/config.json
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /srv/docker/docker1/dns/data/rawdns/config.json:/etc/rawdns/config.json
    networks:
      default:
        ipv4_address: 10.10.30.53
        ipv6_address: 2001:67c:197c:1000:30::53

networks:
  default:
    name: docker1
    enable_ipv6: true
    driver_opts:
      com.docker.network.enable_ipv6: "true"
    ipam:
      driver: default
      config:
        - subnet: 2001:67c:197c:1000:30::/64
        - subnet: 10.10.30.0/24

Frigate base setup

Once you have sorted out the GPU drivers, and verified that they work within a Docker container, we can have a look at the setup for Frigate. They have pretty good documentation that I encourage you to read, including the initial install and required Docker configuration here. There are a couple of things we need to decide on, specifically where to store our media/video data, and what shm-size we need.

For storage, I’ve chosen a dedicated set of 1.6TB SSDs in a ZFS mirror as the main media storage. The ZFS pool name is cctv, and I’ve mounted that into the LXC container by adding the following to the LXC config (adjust the number if you already have other mount points).

mp0: /cctv/frigate,mp=/srv/cctv/frigate
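
For reference, a mirror pool like this can be created on the Proxmox host with something along these lines (the device paths are placeholders for your own disks):

# create the cctv pool as a mirror of two SSDs, and a dataset for Frigate
zpool create cctv mirror /dev/disk/by-id/ata-SSD_ONE /dev/disk/by-id/ata-SSD_TWO
zfs create cctv/frigate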

The shm-size depends entirely on the hardware in your Proxmox server, the amount of resources you’ve assigned to the LXC container, and how many cameras you’re going to add. The installation guide I linked to has some calculations you can use to find a suitable shm-size:

# Template for one camera without logs, replace <width> and <height>
$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 20 + 270480) / 1048576))'

# Example for 1280x720, including logs
$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 20 + 270480) / 1048576 + 40))'
66.63MB

# Example for eight cameras detecting at 1280x720, including logs
$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576) * 8 + 40))'
253.00MB

Since I have the hardware for it, I simply used a value that will most likely never be a problem for my setup: 512MB. With that in mind, we should have enough for a base configuration.

# .env
RESTART=unless-stopped
DOMAIN_NAME=foobar.com
RESOLV_CONF=/srv/docker/docker1/dns/data/resolv.conf
CONFIG_BASE=/srv/docker/docker1/cctv/data
TZ=Europe/Oslo
FRIGATE_MQTT_USER=frigate
FRIGATE_MQTT_PASSWORD=supersecret
FRIGATE_RTSP_USER=cam
FRIGATE_RTSP_PASSWORD=supersecret

# docker-compose.yml
services:
  frigate:
    container_name: frigate
    hostname: frigate
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    restart: ${RESTART}
    shm_size: "512mb"
    environment:
      TZ: ${TZ}
      FRIGATE_MQTT_USER: ${FRIGATE_MQTT_USER}
      FRIGATE_MQTT_PASSWORD: ${FRIGATE_MQTT_PASSWORD}
      FRIGATE_RTSP_USER: ${FRIGATE_RTSP_USER}
      FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
      NVIDIA_VISIBLE_DEVICES: all
      NVIDIA_DRIVER_CAPABILITIES: compute,video,utility
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
              count: all
    expose:
      - "8971" # Authenticated WebUI/API
      - "5000" # Unauthenticated WebUI/API
    ports:
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    volumes:
      - ${RESOLV_CONF}:/etc/resolv.conf:ro
      - ${CONFIG_BASE}/frigate/config:/config
      - ${CONFIG_BASE}/frigate/models:/models
      - /srv/cctv/frigate:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 4G
    labels:
      traefik.enable: true
      traefik.http.routers.frigate.entrypoints: websecure
      traefik.http.routers.frigate.rule: Host(`cctv.${DOMAIN_NAME}`)
      traefik.http.routers.frigate.tls: true
      traefik.http.routers.frigate.middlewares: authentik@docker
      traefik.http.services.frigate-cctv.loadbalancer.server.port: 8971
      traefik.http.services.frigate-cctv.loadbalancer.server.scheme: https
      traefik.http.routers.frigate-auth.entrypoints: websecure
      traefik.http.routers.frigate-auth.rule: Host(`cctv.${DOMAIN_NAME}`) && PathPrefix(`/outpost.goauthentik.io/`)
      traefik.http.routers.frigate-auth.tls: true
      traefik.http.routers.frigate-auth.service: authentik-server-web

networks:
  default:
    external: true
    name: docker1

There are several things worth noting in the configuration above. The two NVIDIA entries in environment: and the entire deploy: section ensure GPU offloading is available for Frigate. We set FRIGATE_*_USER + FRIGATE_*_PASSWORD so that we can reference them inside the Frigate config.yaml. We use the authenticated WebUI/API port (8971), which will automatically authenticate us via HTTP headers once that is configured (see further down). We expose the RTSP/WebRTC ports externally, while the WebUI/API is only exposed within Docker. We mount /tmp/cache as tmpfs (RAM disk) to avoid unnecessary writes to the SSDs; the size can probably be smaller than 4GB, and I might adjust it in the future. We mount the cctv pool into /media/frigate, and set TZ so the container uses the same timezone as the LXC container. The labels at the end configure the Traefik reverse-proxy (which is somewhat outside the scope of this guide, but left for reference).

# first time running frigate it prints the admin username + password
docker-compose pull
docker-compose up
[…]
frigate  | 2026-01-12 […]  INFO    : ********************************************************
frigate  | 2026-01-12 […]  INFO    : ********************************************************
frigate  | 2026-01-12 […]  INFO    : ***    Auth is enabled, but no users exist.          ***
frigate  | 2026-01-12 […]  INFO    : ***    Created a default user:                       ***
frigate  | 2026-01-12 […]  INFO    : ***    User: admin                                   ***
frigate  | 2026-01-12 […]  INFO    : ***    Password: supersecretfoobarbazsmileyface      ***
frigate  | 2026-01-12 […]  INFO    : ********************************************************
frigate  | 2026-01-12 […]  INFO    : ********************************************************
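
It’s also worth verifying that the GPU is visible from inside the running container (this assumes the utility capability in NVIDIA_DRIVER_CAPABILITIES is passed through as configured above):

# should list the GPU from inside the Frigate container
docker exec -it frigate nvidia-smi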

After starting Frigate, you should be able to log in using the username + password printed in the logs. On the System Metrics page you should see the GPU listed. You might also want to download a model for object detection, like YOLO, in order to utilize GPU-based detection. A list of available models and how to download them is here. You could also use the custom models that Frigate+ gives you. With my RTX 2000 Ada, I’m starting with yolov9c 640 and will work from there. The model file can be built with Docker:

docker build . --build-arg MODEL_SIZE=c --build-arg IMG_SIZE=640 --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime "onnx-simplifier>=0.4.1" onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
FROM scratch
ARG MODEL_SIZE
ARG IMG_SIZE
COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
EOF

This will produce the model as a .onnx file, which we can use in Frigate by adding some config to config.yaml:

detectors:
    onnx_0:
        type: onnx
    onnx_1:
        type: onnx

model:
    model_type: yolo-generic
    width: 640
    height: 640
    input_tensor: nchw
    input_dtype: float
    path: /models/yolov9-c-640.onnx
    labelmap_path: /labelmap/coco-80.txt
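
The built yolov9-c-640.onnx file must end up in the directory mounted at /models in the container. With the volumes from the docker-compose.yml above, that means copying it into the models directory under CONFIG_BASE:

# host path derived from CONFIG_BASE in the .env file
cp yolov9-c-640.onnx /srv/docker/docker1/cctv/data/frigate/models/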

My initial config can be seen below, with what I think are somewhat sensible settings. I added a single UniFi G3 camera to the setup for testing video and audio, which I also needed to rotate 90 degrees (hence the slightly more complex setup/config). This is by no means a copy-paste config that you should use, but it gives you a good starting point.

go2rtc:
  rtsp:
    username: "{FRIGATE_RTSP_USER}"
    password: "{FRIGATE_RTSP_PASSWORD}"
  streams:
    stream_garage:
      - rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@10.10.10.10:554/s0
    stream_garage_rotated:
      - "ffmpeg:stream_garage#video=h264#rotate=90#audio=aac"
      - "ffmpeg:stream_garage#video=h264#rotate=90#audio=opus"
    stream_garage_sub:
      - rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@10.10.10.10:554/s1
    stream_garage_sub_rotated:
      - "ffmpeg:stream_garage_sub#video=h264#rotate=90#audio=aac"
      - "ffmpeg:stream_garage_sub#video=h264#rotate=90#audio=opus"

cameras:
  garage:
    enabled: true
    ffmpeg:
      output_args:
        record: preset-record-ubiquiti
      inputs:
        - path: rtsp://127.0.0.1:8554/stream_garage_rotated
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/stream_garage_sub_rotated
          input_args: preset-rtsp-restream
          roles:
            - detect
            - audio
    live:
      streams:
        Main stream: stream_garage_rotated
        Sub stream: stream_garage_sub_rotated

detect:
  enabled: true
  fps: 5

motion:
  enabled: true

audio:
  enabled: true
  listen:
    - bark
    - fire_alarm
    - knock
    - scream
    - slam
    - speech
    - yell

record:
  enabled: True
  retain:
    days: 3
    mode: all
  alerts:
    retain:
      days: 90
      mode: motion
  detections:
    retain:
      days: 90
      mode: motion

detectors:
  onnx_0:
    type: onnx
  onnx_1:
    type: onnx

model:
  model_type: yolo-generic
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /models/yolov9-c-640.onnx
  labelmap_path: /labelmap/coco-80.txt

semantic_search:
  enabled: True
  model: "jinav1"
  model_size: large

Authentik

I will not go into details on how to set up Authentik from scratch, just the relevant changes that are needed to let Authentik take care of Frigate authentication. At this point I’m going to assume you’ve got both Authentik and Frigate running, and are ready to make the relevant configuration changes.

We need to send a group value of either admin or viewer in the headers. We could create a group called admin and/or viewer in Authentik, assign users to those, and simply filter on that in the Frigate configuration (since Authentik sends all the user groups in the X-Authentik-Groups header). However, those group names would be ambiguous if you have many other systems/groups within Authentik. I wanted to call mine frigate-admin and frigate-viewer, but still be able to pass admin or viewer in the headers.

This can be done in two ways: either by setting the header statically in the groups, or by creating a property mapping, assigning it to the Frigate provider (Selected Scopes under Advanced protocol settings), and then setting the frigate_role: admin attribute on the frigate-admin group and the frigate_role: viewer attribute on the frigate-viewer group. The property mapping looks like this:

return {
    "ak_proxy": {
        "user_attributes": {
            "additionalHeaders": {
                "Remote-Groups": request.user.group_attributes().get("frigate_role", None)
            }
        }
    }
}
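
With that mapping in place, the only thing needed on the groups themselves is the frigate_role attribute:

# frigate-admin group attributes
frigate_role: admin

# frigate-viewer group attributes
frigate_role: viewer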

This approach is more flexible if you need more complex attributes and/or want to filter on attributes. However, for our use case, it’s much simpler to just set the header statically within the group via the additionalHeaders attribute.

# frigate-admin group
additionalHeaders:
    Remote-Groups: admin
    X-Proxy-Secret: some-long-secret-string

# frigate-viewer group
additionalHeaders:
    Remote-Groups: viewer
    X-Proxy-Secret: the-same-long-secret-string-from-above

Frigate supports a set of pre-defined headers (list available here), but I chose Remote-Groups as it doesn’t conflict with anything else I’m using. We also want to send the X-Proxy-Secret to ensure only traffic from Authentik is accepted by Frigate. If you use Traefik, like I do, you must also remember to add the two headers (X-Proxy-Secret and Remote-Groups) to the list of forwarded headers via the authResponseHeaders setting. I’m using labels in Docker to achieve this:

traefik.http.middlewares.authentik.forwardauth.authResponseHeaders: X-authentik-username,X-authentik-groups,X-authentik-entitlements,X-authentik-email,X-authentik-name,X-authentik-uid,X-authentik-jwt,X-authentik-meta-jwks,X-authentik-meta-outpost,X-authentik-meta-provider,X-authentik-meta-app,X-authentik-meta-version,X-Proxy-Secret,Remote-Groups

You can test that the two headers are sent correctly by assigning your Authentik user to the frigate-admin group, and using something like whoami to verify that the X-Proxy-Secret: some-secret and Remote-Groups: admin headers arrive.
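
A minimal whoami service behind the same Authentik middleware could look something like this (the hostname and router names are just examples):

services:
  whoami:
    container_name: whoami
    image: traefik/whoami
    restart: unless-stopped
    labels:
      traefik.enable: true
      traefik.http.routers.whoami.entrypoints: websecure
      traefik.http.routers.whoami.rule: Host(`whoami.${DOMAIN_NAME}`)
      traefik.http.routers.whoami.tls: true
      traefik.http.routers.whoami.middlewares: authentik@docker

networks:
  default:
    external: true
    name: docker1

whoami echoes the incoming request back, so after logging in through Authentik you should see both X-Proxy-Secret and Remote-Groups among the request headers.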

At this point we should be ready to configure Frigate to accept headers from Authentik for authentication. Add the following to your Frigate config.yaml:

auth:
  enabled: false
proxy:
  auth_secret: the-X-Proxy-Secret-you-set-in-Authentik
  separator: "|"
  header_map:
    user: x-authentik-username
    role: remote-groups

Restart Traefik + Frigate, and you should now be logged in as your Authentik user with admin privileges in Frigate. Users added to the frigate-viewer group will get read-only access to Frigate.

Home Assistant

After I had installed Frigate, I contemplated migrating my HA setup from HAOS to Docker, to have more flexibility. One advantage would be being able to upgrade/downgrade to specific versions, and having all the HA config in git together with the rest of my Docker config. After some research, and weighing the pros and cons, I ultimately decided to keep using HAOS, at least for now. A lot of the negative talk about HAOS seems to stem from old versions. This matches my experience; I’ve been using HAOS for several years now without a single issue or hiccup.

In order to integrate Frigate with Home Assistant, they both need to use the same MQTT broker. If you haven’t already installed the MQTT integration in HA together with an MQTT broker, please do so now. There are good instructions on how to do this here. I chose Mosquitto as the MQTT broker. I use the Z-Wave JS and Zigbee2MQTT addons for device connectivity, all using MQTT as the backend.

Next we need to set up a user in Mosquitto that Frigate can use to connect. Go to Settings > Add-ons > Mosquitto broker > Configuration > Add (under “Logins”). Add a suitable username (like frigate), and set a strong password. Then add the following to Frigate config.yaml:

mqtt:
  enabled: true
  host: ha.server.foobar.com
  user: "{FRIGATE_MQTT_USER}"
  password: "{FRIGATE_MQTT_PASSWORD}"

Replace ha.server.foobar.com with the IP or hostname of your MQTT broker (if you’re running HAOS, it’s the IP address of HAOS itself you’ll use there). Update the user/password in the .env file, restart Frigate, and MQTT should be configured.
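
You can verify the connection from any machine with the Mosquitto clients installed; Frigate publishes its availability on the frigate/available topic (host and credentials are the ones configured above):

# should print "frigate/available online" once Frigate has connected
mosquitto_sub -h ha.server.foobar.com -u frigate -P supersecret -t 'frigate/available' -v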

We also need to configure media_source. Add the following to your HA configuration.yaml file (no need to add anything else), and restart HA.

media_source:

Then we need to add the Frigate integration in HA, which is done via HACS. If you haven’t installed HACS on your HA, please install it by following the official instructions. Once installed, navigate to HA > HACS > Click in the search bar and type “Frigate” > Frigate > Download. It will then ask you to restart HA. Then we can add the integration from HA > Settings > Devices & Services > Add Integration > Frigate. It will ask you for the URL to Frigate. This is where it becomes slightly tricky, since we only have three ways to access Frigate from HA:

  1. We use the Traefik/Authentik URL.
  2. We expose the authenticated Frigate-port 8971 and use that.
  3. We expose the unauthenticated Frigate-port 5000 and use that.

The problem with option 1) is that Authentik has no way of bypassing the authentication just for requests coming from HA, and the username/password options in the Frigate config in HA do not work with the Authentik login process. I was hoping that you could enable Frigate user authentication and use the header login at the same time, but that doesn’t seem to work (the moment you set auth: enabled: true, it asks for login information, even if the headers are set). This eliminates option 2), as we have no way of injecting the X-Proxy-Secret header from HA. But I also don’t want to use option 3), since that would leave Frigate way too exposed. Since HA and Frigate are on separate servers in my setup, I would have to expose port 5000 on the Docker host, which would make it available from all devices in that VLAN. Sure, I have a dedicated server VLAN, but there are lots of other servers on that VLAN that I don’t want to have access to Frigate.

This leaves us with only one potential option: making a separate rule/bypass in Traefik that whitelists traffic coming from the HA IP, and that also injects the required headers (X-Authentik-Username, Remote-Groups, and X-Proxy-Secret). After some tinkering, I found a working Traefik configuration that achieves this. We add these to the labels: section in the Frigate docker-compose.yml file:

traefik.http.middlewares.frigate-ha-headers.headers.customrequestheaders.X-Proxy-Secret: the-same-secret-as-previous-config
traefik.http.middlewares.frigate-ha-headers.headers.customrequestheaders.X-Authentik-Username: homeassistant
traefik.http.middlewares.frigate-ha-headers.headers.customrequestheaders.Remote-Groups: admin
traefik.http.routers.frigate_ha.entrypoints: websecure
traefik.http.routers.frigate_ha.rule: Host(`cctv.${DOMAIN_NAME}`) && (ClientIP(`10.10.50.10`) || ClientIP(`2001:67c:197c:1000:50::10`))
traefik.http.routers.frigate_ha.tls: true
traefik.http.routers.frigate_ha.middlewares: frigate-ha-headers@docker
traefik.http.routers.frigate_ha.priority: 100

Change the IP addresses to whatever addresses your HA instance has. This config effectively makes requests coming from the given IPs authenticate in Frigate as the homeassistant user with admin privileges. We then simply use the external URL that goes through Traefik (https://cctv.${DOMAIN_NAME}) as the Frigate URL in the HA setup. In my case it uses Let’s Encrypt certificates, so we leave Validate SSL enabled and the username/password fields blank. This should work, and all cameras in Frigate should be added as devices in HA.
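
A quick way to test the bypass is to make a request from the HA host itself; /api/config is an authenticated Frigate endpoint, so getting JSON back (instead of being sent to the Authentik login) means the rule works:

# run from the HA host (10.10.50.10); other hosts should be redirected to Authentik instead
curl -s https://cctv.foobar.com/api/config | head -c 200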

Once the Frigate integration has been configured, we also want to enable the native WebRTC support, so that HA doesn’t have to pull the RTSP streams and transcode them in go2rtc again (since go2rtc already handles this in Frigate). HA > Settings > Devices & services > Frigate > Click the options icon > Enable Use Frigate-native WebRTC support > Submit. I also set Disallow unauthenticated notification access after seconds to 86400 (24 hours). We can also set the RTSP URL template. I’m not sure if it’s strictly needed when the native WebRTC support is enabled, but if you for some reason can’t enable that, or something else isn’t working, I found it best to just set it properly. Since we have RTSP authentication enabled, and use a reverse-proxy, we need to change this:

rtsp://cam:supersecret@frigate.server.foobar.com:8554/stream_{{ name|lower }}

This assumes that you have go2rtc running in Frigate for all camera streams, and that your configuration is consistent in the sense that all go2rtc streams: entries use the stream_$cameraname syntax, and that the entries under cameras: use the same $cameraname.
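
With the garage camera from the configuration above, the template expands to:

# {{ name|lower }} becomes "garage", matching the stream_garage entry in go2rtc
rtsp://cam:supersecret@frigate.server.foobar.com:8554/stream_garage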

Final configs

That should be it. Below are the complete contents of my configuration files after all of the above changes. Please don’t just blindly copy-paste these and complain if anything isn’t working (-:

/etc/docker/daemon.json:

{
  "storage-driver": "overlay2",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "experimental": true,
  "ip6tables": true,
  "iptables": true,
  "ipv6": true,
  "fixed-cidr": "10.10.10.0/24",
  "fixed-cidr-v6": "2001:67c:197c:1000:10::/64",
  "default-address-pools": [
    {
      "base": "10.10.20.0/23",
      "size": 24
    }
  ]
}

/srv/docker/docker1/dns/docker-compose.yml. For the DNS service, since the built-in Docker DNS at one point didn’t fully support IPv6. We also create the main Docker network used by the other docker-compose configurations we have.

services:
  dns:
    container_name: dns
    hostname: dns
    image: tianon/rawdns
    restart: unless-stopped
    command: rawdns /etc/rawdns/config.json
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /srv/docker/docker1/dns/data/rawdns/config.json:/etc/rawdns/config.json
    networks:
      default:
        ipv4_address: 10.10.30.53
        ipv6_address: 2001:67c:197c:1000:30::53

networks:
  default:
    name: docker1
    enable_ipv6: true
    driver_opts:
      com.docker.network.enable_ipv6: "true"
    ipam:
      driver: default
      config:
        - subnet: 2001:67c:197c:1000:30::/64
        - subnet: 10.10.30.0/24

/srv/docker/docker1/dns/data/rawdns/config.json:

{
  "docker1.": {
    "type": "containers",
    "socket": "unix:///var/run/docker.sock"
  },
  "docker.": {
    "type": "containers",
    "socket": "unix:///var/run/docker.sock"
  },
  ".": {
    "type": "forwarding",
    "nameservers": [
      "2001:67c:197c:1000:60::110",
      "2001:67c:197c:1000:60::210",
      "10.10.10.110"
    ]
  }
}

/srv/docker/docker1/cctv/.env:

RESTART=unless-stopped
DOMAIN_NAME=foobar.com
RESOLV_CONF=/srv/docker/docker1/dns/data/resolv.conf
CONFIG_BASE=/srv/docker/docker1/cctv/data
TZ=Europe/Oslo
FRIGATE_MQTT_USER=frigate
FRIGATE_MQTT_PASSWORD=supersecret
FRIGATE_RTSP_USER=cam
FRIGATE_RTSP_PASSWORD=supersecret

/srv/docker/docker1/cctv/docker-compose.yml:

services:
  frigate:
    container_name: frigate
    hostname: frigate
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    restart: ${RESTART}
    shm_size: "512mb"
    environment:
      TZ: ${TZ}
      FRIGATE_MQTT_USER: ${FRIGATE_MQTT_USER}
      FRIGATE_MQTT_PASSWORD: ${FRIGATE_MQTT_PASSWORD}
      FRIGATE_RTSP_USER: ${FRIGATE_RTSP_USER}
      FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
      NVIDIA_VISIBLE_DEVICES: all
      NVIDIA_DRIVER_CAPABILITIES: compute,video,utility
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
              count: all
    expose:
      - "8971" # Authenticated WebUI/API
      - "5000" # Unauthenticated WebUI/API
    ports:
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    volumes:
      - ${RESOLV_CONF}:/etc/resolv.conf:ro
      - ${CONFIG_BASE}/frigate/config:/config
      - ${CONFIG_BASE}/frigate/models:/models
      - /srv/cctv/frigate:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 4G
    labels:
      traefik.enable: true
      traefik.http.routers.frigate.entrypoints: websecure
      traefik.http.routers.frigate.rule: Host(`cctv.${DOMAIN_NAME}`)
      traefik.http.routers.frigate.tls: true
      traefik.http.routers.frigate.middlewares: authentik@docker
      traefik.http.services.frigate-cctv.loadbalancer.server.port: 8971
      traefik.http.services.frigate-cctv.loadbalancer.server.scheme: https
      traefik.http.routers.frigate-auth.entrypoints: websecure
      traefik.http.routers.frigate-auth.rule: Host(`cctv.${DOMAIN_NAME}`) && PathPrefix(`/outpost.goauthentik.io/`)
      traefik.http.routers.frigate-auth.tls: true
      traefik.http.routers.frigate-auth.service: authentik-server-web
      traefik.http.middlewares.frigate-ha-headers.headers.customrequestheaders.X-Proxy-Secret: the-same-secret-as-previous-config
      traefik.http.middlewares.frigate-ha-headers.headers.customrequestheaders.X-Authentik-Username: homeassistant
      traefik.http.middlewares.frigate-ha-headers.headers.customrequestheaders.Remote-Groups: admin
      traefik.http.routers.frigate_ha.entrypoints: websecure
      traefik.http.routers.frigate_ha.rule: Host(`cctv.${DOMAIN_NAME}`) && (ClientIP(`10.10.50.10`) || ClientIP(`2001:67c:197c:1000:50::10`))
      traefik.http.routers.frigate_ha.tls: true
      traefik.http.routers.frigate_ha.middlewares: frigate-ha-headers@docker
      traefik.http.routers.frigate_ha.priority: 100

networks:
  default:
    external: true
    name: docker1

Frigate config.yaml:

mqtt:
  enabled: true
  host: ha.server.foobar.com
  user: "{FRIGATE_MQTT_USER}"
  password: "{FRIGATE_MQTT_PASSWORD}"

go2rtc:
  rtsp:
    username: "{FRIGATE_RTSP_USER}"
    password: "{FRIGATE_RTSP_PASSWORD}"
  streams:
    stream_garage:
      - rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@10.10.10.10:554/s0
    stream_garage_rotated:
      - "ffmpeg:stream_garage#video=h264#rotate=90#audio=aac"
      - "ffmpeg:stream_garage#video=h264#rotate=90#audio=opus"
    stream_garage_sub:
      - rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@10.10.10.10:554/s1
    stream_garage_sub_rotated:
      - "ffmpeg:stream_garage_sub#video=h264#rotate=90#audio=aac"
      - "ffmpeg:stream_garage_sub#video=h264#rotate=90#audio=opus"

cameras:
  garage:
    enabled: true
    ffmpeg:
      output_args:
        record: preset-record-ubiquiti
      inputs:
        - path: rtsp://127.0.0.1:8554/stream_garage_rotated
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/stream_garage_sub_rotated
          input_args: preset-rtsp-restream
          roles:
            - detect
            - audio
    live:
      streams:
        Main stream: stream_garage_rotated
        Sub stream: stream_garage_sub_rotated

detect:
  enabled: true
  fps: 5

motion:
  enabled: true

audio:
  enabled: true
  listen:
    - bark
    - fire_alarm
    - knock
    - scream
    - slam
    - speech
    - yell

record:
  enabled: True
  retain:
    days: 3
    mode: all
  alerts:
    retain:
      days: 90
      mode: motion
  detections:
    retain:
      days: 90
      mode: motion

detectors:
  onnx_0:
    type: onnx
  onnx_1:
    type: onnx

model:
  model_type: yolo-generic
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /models/yolov9-c-640.onnx
  labelmap_path: /labelmap/coco-80.txt

semantic_search:
  enabled: True
  model: "jinav1"
  model_size: large

auth:
  enabled: false

proxy:
  auth_secret: the-X-Proxy-Secret-you-set-in-Authentik
  separator: "|"
  header_map:
    user: x-authentik-username
    role: remote-groups

version: 0.16-0
