Back up, aggregate, and restore (online)

Neo4j performs backups using the Admin Service, which is available only within the Kubernetes cluster and whose access should be guarded. For more information, see Accessing Neo4j.
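
For instance, a quick way to confirm the name of the Admin Service that the backup chart targets later (via backup.databaseAdminServiceName) is to list the services of your Neo4j release; the namespace "default" is an assumption:

kubectl get services -n default | grep admin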

Prepare to back up a database to a cloud provider (AWS, GCP, and Azure) bucket

You can use the neo4j/neo4j-admin Helm chart to back up a Neo4j database to any cloud provider (AWS, GCP, and Azure) bucket. The neo4j/neo4j-admin Helm chart also supports backing up multiple databases, workload identity integration for GCP, AWS, and Azure, and MinIO (an AWS S3-compatible object storage API) for non-TLS/SSL endpoints.

Prerequisites

Before you back up a database and upload it to a bucket, verify that you have the following:

Create a Kubernetes secret

You can create a Kubernetes secret containing the credentials for accessing the cloud provider bucket using one of the following options:

GCP

Create a secret named gcpcreds using your GCP service account JSON key file. The JSON key file contains all the details of the service account that has access to the bucket.

kubectl create secret generic gcpcreds --from-file=credentials=/path/to/gcpcreds.json

AWS

  1. Create a credentials file in the following format:

    [ default ]
    region = us-east-1
    aws_access_key_id = <your-aws_access_key_id>
    aws_secret_access_key = <your-aws_secret_access_key>
  2. Create a secret named awscreds using the credentials file:

    kubectl create secret generic awscreds --from-file=credentials=/path/to/your/credentials

Azure

  1. Create a credentials file in the following format:

    AZURE_STORAGE_ACCOUNT_NAME=<your-azure-storage-account-name>
    AZURE_STORAGE_ACCOUNT_KEY=<your-azure-storage-account-key>
  2. Create a secret named azurecred using the credentials file:

    kubectl create secret generic azurecred --from-file=credentials=/path/to/your/credentials
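
Optionally, you can confirm that a secret was created and which keys it contains, without printing its values. For example, for the awscreds secret created above (the other secrets can be checked the same way):

kubectl describe secret awscreds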

Configure the backup parameters

You can configure the backup parameters in the backup-values.yaml file either by using the secretName and secretKeyName parameters or by mapping a Kubernetes service account to a workload identity integration.

The following examples show the minimum configuration required to perform a backup to a cloud provider bucket. For more information about the available backup parameters, see Backup parameters.

Configure the backup-values.yaml file using the secretName and secretKeyName parameters

GCP
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin" #This is the Neo4j Admin Service name.
  database: "neo4j,system"
  cloudProvider: "gcp"
  secretName: "gcpcreds"
  secretKeyName: "credentials"

consistencyCheck:
  enabled: true

AWS
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin"
  database: "neo4j,system"
  cloudProvider: "aws"
  secretName: "awscreds"
  secretKeyName: "credentials"

consistencyCheck:
  enabled: true

Azure
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin"
  database: "neo4j,system"
  cloudProvider: "azure"
  secretName: "azurecreds"
  secretKeyName: "credentials"

consistencyCheck:
  enabled: true

Configure the backup-values.yaml file using a service account with workload identity integration

In some cases, it may be useful to assign a Kubernetes service account with workload identity integration to the Neo4j backup pod. This is especially relevant when you want to improve security and have more precise access control for the pod. Doing so ensures secure access to resources based on the pod's identity in the cloud ecosystem. For more information about setting up a service account with workload identity, see Google Kubernetes Engine (GKE) → Use Workload Identity, Amazon EKS → Configure a Kubernetes service account to assume an IAM role, and Microsoft Azure → Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS).

To configure the Neo4j backup pod to use a Kubernetes service account with workload identity, set serviceAccountName to the name of the service account to use. For Azure deployments, you also need to set the azureStorageAccountName parameter to the name of the Azure storage account to which the backup files will be uploaded. For example:

GCP
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin" #This is the Neo4j Admin Service name.
  database: "neo4j,system"
  cloudProvider: "gcp"
  secretName: ""
  secretKeyName: ""

consistencyCheck:
  enabled: true

serviceAccountName: "demo-service-account"

AWS
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin"
  database: "neo4j,system"
  cloudProvider: "aws"
  secretName: ""
  secretKeyName: ""

consistencyCheck:
  enabled: true

serviceAccountName: "demo-service-account"

Azure
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin"
  database: "neo4j,system"
  cloudProvider: "azure"
  azureStorageAccountName: "storageAccountName"

consistencyCheck:
  enabled: true

serviceAccountName: "demo-service-account"

The /backups mount created by default is an emptyDir volume. This means the data stored in it is not persistent and is lost when the pod is deleted. To use a persistent volume for backups instead, add the following section to the backup-values.yaml file:

tempVolume:
  persistentVolumeClaim:
    claimName: backup-pvc

You need to create the persistent volume and persistent volume claim before installing the neo4j-admin Helm chart. For more information, see Volume mounts and persistent volumes.
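
As a rough illustration only, a dynamically provisioned claim matching the claimName above might look like the following; the storage class and size are assumptions and must be adapted to your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumption: use a storage class available in your cluster
  resources:
    requests:
      storage: 100Gi

Apply it with kubectl apply -f backup-pvc.yaml before installing the chart.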

Configure an S3-compatible storage endpoint

The backup system supports any S3-compatible storage service. You can configure both TLS and non-TLS endpoints using the following parameters in the backup-values.yaml file:

backup:
  # Specify your S3-compatible endpoint (e.g., https://s3.amazonaws.com or your custom endpoint)
  s3Endpoint: "https://s3.custom-provider.com"

  # Enable TLS for secure connections (default: false)
  s3EndpointTLS: true

  # Optional: Provide a base64-encoded CA certificate for custom certificate authorities
  s3CACert: "base64_encoded_ca_cert_data"

  # Optional: Skip TLS verification (not recommended for production)
  s3SkipVerify: false

The following examples show how to configure the backup system for different S3-compatible storage providers:

AWS S3 standard endpoint
neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName: "standalone-admin"
  s3Endpoint: "https://s3.amazonaws.com"
  s3EndpointTLS: true
  database: "neo4j,system"
  cloudProvider: "aws"
  secretName: "awscreds"
  secretKeyName: "credentials"

consistencyCheck:
  enabled: true
Custom S3-compatible provider with a self-signed certificate
backup:
  bucketName: "my-bucket"
  s3Endpoint: "https://custom-s3.example.com"
  s3EndpointTLS: true
  s3CACert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t..."  # Base64-encoded CA cert
  cloudProvider: "aws"
  secretName: "awscreds"
  secretKeyName: "credentials"
Legacy MinIO support
backup:
  bucketName: "my-bucket"
  databaseAdminServiceName: "standalone-admin"
  minioEndpoint: "http://minio.example.com:9000"  # Deprecated: Use s3Endpoint instead
  database: "neo4j,system"
  cloudProvider: "aws"
  secretName: "awscreds"
  secretKeyName: "credentials"
  • When using an HTTPS endpoint, the s3EndpointTLS parameter must be set to true.

  • When using custom CA certificates, provide them base64-encoded in the s3CACert parameter (see the sketch after this list).

  • The s3SkipVerify parameter should only be used in development environments.

  • Legacy MinIO support via the minioEndpoint parameter is deprecated - use s3Endpoint instead.
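
As a hedged example of preparing the value for s3CACert, you can base64-encode the CA certificate on a single line; the file name ca.crt is an assumption:

base64 -w0 ca.crt               # GNU coreutils (Linux)
base64 -i ca.crt | tr -d '\n'   # macOS

Paste the resulting string into the s3CACert parameter of the backup-values.yaml file.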

Prepare to back up a database to local storage

You can use the neo4j/neo4j-admin Helm chart to back up a Neo4j database to local storage. When configuring the backup-values.yaml file, leave the cloudProvider field empty and provide a persistent volume in the tempVolume section to ensure that the backup files survive pod deletion.

You need to create the persistent volume and persistent volume claim before installing the neo4j-admin Helm chart. For more information, see Volume mounts and persistent volumes.

For example:

neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  jobSchedule: "* * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  backoffLimit: 3

backup:
  bucketName: "my-bucket"
  databaseAdminServiceName:  "standalone-admin"
  database: "neo4j,system"
  cloudProvider: ""

consistencyCheck:
  enabled: true

tempVolume:
  persistentVolumeClaim:
    claimName: backup-pvc

Backup parameters

To see the configurable options of the Helm chart, use the helm show values command with the neo4j/neo4j-admin Helm chart.
The neo4j/neo4j-admin Helm chart also supports assigning the backup pod to specific nodes using nodeSelector labels, as well as affinity/anti-affinity rules or tolerations (a short example follows the values listing below). For more information, see Assign backup pods to specific nodes, and the affinity and anti-affinity rules and taints and tolerations pages in the official Kubernetes documentation.

For example:

helm show values neo4j/neo4j-admin
## @param nameOverride String to partially override common.names.fullname
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname
fullnameOverride: ""
# disableLookups will disable all the lookups done in the helm charts
# This should be set to true when using ArgoCD since ArgoCD uses helm template and the helm lookups will fail
# You can enable this when executing helm commands with --dry-run command
disableLookups: false

neo4j:
  image: "neo4j/helm-charts-backup"
  imageTag: "2025.05.0"
  podLabels: {}
#    app: "demo"
#    acac: "dcdddc"
  podAnnotations: {}
#    ssdvvs: "svvvsvs"
#    vfsvswef: "vcfvgb"
  # define the backup job schedule . default is * * * * *
  jobSchedule: ""
  # default is 3
  successfulJobsHistoryLimit:
  # default is 1
  failedJobsHistoryLimit:
  # default is 3
  backoffLimit:
  #add labels if required
  labels: {}

backup:
  # Ensure the bucket is already existing in the respective cloud provider
  # In case of azure the bucket is the container name in the storage account
  # bucket: azure-storage-container
  bucketName: ""
  # Specify multiple backup endpoints as comma-separated string
  # e.g. "10.3.3.2:6362,10.3.3.3:6362,10.3.3.4:6362"
  databaseBackupEndpoints: ""
  #ex: standalone-admin.default.svc.cluster.local:6362
  # admin service name -  standalone-admin
  # namespace - default
  # cluster domain - cluster.local
  # port - 6362

  #ex: 10.3.3.2:6362
  # admin service ip - 10.3.3.2
  # port - 6362

  databaseAdminServiceName: ""
  databaseAdminServiceIP: ""
  #default name is 'default'
  databaseNamespace: ""
  #default port is 6362
  databaseBackupPort: ""
  #default value is cluster.local
  databaseClusterDomain: ""
  # specify S3-compatible endpoint (e.g., http://s3.amazonaws.com or your custom S3 endpoint)
  # This can be any S3-compatible endpoint including AWS S3, MinIO, or other S3-compatible storage services
  # For TLS endpoints (https), set s3EndpointTLS to true
  s3Endpoint: ""
  # Enable TLS for S3 endpoint (default: false)
  s3EndpointTLS: false
  # Optional: Base64-encoded CA certificate for S3 endpoint TLS verification
  # Only needed for self-signed certificates or private CA
  s3CACert: ""
  # Optional: Skip TLS verification (not recommended for production)
  s3SkipVerify: false
  #name of the database to backup ex: neo4j or neo4j,system (You can provide command separated database names)
  # In case of comma separated databases failure of any single database will lead to failure of complete operation
  database: ""
  # cloudProvider can be either gcp, aws, or azure
  # if cloudProvider is empty then the backup will be done to the /backups mount.
  # the /backups mount can point to a persistentVolume based on the definition set in tempVolume
  cloudProvider: ""



  # name of the kubernetes secret containing the respective cloud provider credentials
  # Ensure you have read,write access to the mentioned bucket
  # For AWS :
  # add the below in a file and create a secret via
  # 'kubectl create secret generic awscred --from-file=credentials=/demo/awscredentials'

  #  [ default ]
  #  region = us-east-1
  #  aws_access_key_id = XXXXX
  #  aws_secret_access_key = XXXX

  # For AZURE :
  # add the storage account name and key in below format in a file create a secret via
  # 'kubectl create secret generic azurecred --from-file=credentials=/demo/azurecredentials'

  #  AZURE_STORAGE_ACCOUNT_NAME=XXXX
  #  AZURE_STORAGE_ACCOUNT_KEY=XXXX

  # For GCP :
  # create the secret via the gcp service account json key file.
  # ex: 'kubectl create secret generic gcpcred --from-file=credentials=/demo/gcpcreds.json'
  secretName: ""
  # provide the keyname used in the above secret
  secretKeyName: ""
  # provide the azure storage account name
  # this to be provided when you are using workload identity integration for azure
  azureStorageAccountName: ""
  #setting this to true will not delete the backup files generated at the /backup mount
  keepBackupFiles: true

  #Below are all neo4j-admin database backup flags / options
  #To know more about the flags read here : https://neo4j.ac.cn/docs/operations-manual/current/backup-restore/online-backup/
  pageCache: ""
  includeMetadata: "all"
  type: "AUTO"
  keepFailed: false
  parallelRecovery: false
  verbose: true
  heapSize: ""

  # https://neo4j.ac.cn/docs/operations-manual/current/backup-restore/aggregate/
  # Performs aggregate backup. If enabled, NORMAL BACKUP WILL NOT BE DONE only aggregate backup
  # fromPath supports only s3 or local mount. For s3 , please set cloudProvider to aws and use either serviceAccount or creds
  aggregate:
    enabled: false
    verbose: true
    keepOldBackup: false
    parallelRecovery: false
    # Only AWS S3 or local mount paths are supported
    # For S3 provide the complete path , Ex: s3://bucket1/bucket2
    fromPath: ""
    # database name to aggregate. Can contain * and ? for globbing.
    database: ""
    # Optional temporary directory for aggregation process
    # If not specified, will use the backup directory
    tempDir: ""

#Below are all neo4j-admin database check flags / options
#To know more about the flags read here : https://neo4j.ac.cn/docs/operations-manual/current/tools/neo4j-admin/consistency-checker/
consistencyCheck:
  enable: false
  checkIndexes: true
  checkGraph: true
  checkCounts: true
  checkPropertyOwners: true
  #The database name for which consistency check needs to be done.
  #Defaults to the backup.database values if left empty
  #The database name here should match with one of the database names present in backup.database. If not , the consistency check will be ignored
  database: ""
  maxOffHeapMemory: ""
  threads: ""
  verbose: true

# Set to name of an existing Service Account to use if desired
# Follow the following links for setting up a service account with workload identity
# Azure - https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=go
# GCP - https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
# AWS - https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html
serviceAccountName: ""

# Volume to use as temporary storage for files before they are uploaded to cloud. For large databases local storage may not have sufficient space.
# In that case set an ephemeral or persistent volume with sufficient space here
# The chart defaults to an emptyDir, use this to overwrite default behavior
#tempVolume:
#  persistentVolumeClaim:
#    claimName: backup-pvc

# securityContext defines privilege and access control settings for a Pod. Making sure that we don't run Neo4j as root user.
securityContext:
  runAsNonRoot: true
  runAsUser: 7474
  runAsGroup: 7474
  fsGroup: 7474
  fsGroupChangePolicy: "Always"

containerSecurityContext:
  runAsNonRoot: true
  runAsUser: 7474
  runAsGroup: 7474
  readOnlyRootFilesystem: false
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
# default ephemeral storage of backup container
resources:
  requests:
    ephemeralStorage: "4Gi"
    cpu: ""
    memory: ""
  limits:
    ephemeralStorage: "5Gi"
    cpu: ""
    memory: ""

# nodeSelector labels
# please ensure the respective labels are present on one of nodes or else helm charts will throw an error
nodeSelector: {}
#  label1: "true"
#  label2: "value1"

# set backup pod affinity
affinity: {}
#  podAffinity:
#    requiredDuringSchedulingIgnoredDuringExecution:
#      - labelSelector:
#          matchExpressions:
#            - key: security
#              operator: In
#              values:
#                - S1
#        topologyKey: topology.kubernetes.io/zone
#  podAntiAffinity:
#    preferredDuringSchedulingIgnoredDuringExecution:
#      - weight: 100
#        podAffinityTerm:
#          labelSelector:
#            matchExpressions:
#              - key: security
#                operator: In
#                values:
#                  - S2
#          topologyKey: topology.kubernetes.io/zone

#Add tolerations to the Neo4j pod
tolerations: []
#  - key: "key1"
#    operator: "Equal"
#    value: "value1"
#    effect: "NoSchedule"
#  - key: "key2"
#    operator: "Equal"
#    value: "value2"
#    effect: "NoSchedule"

Back up your database

To back up your database, install the neo4j-admin Helm chart using the configured backup-values.yaml file.

  1. Install the neo4j-admin Helm chart using the backup-values.yaml file:

    helm install backup-name neo4j-admin -f /path/to/your/backup-values.yaml

    The neo4j/neo4j-admin Helm chart installs a cronjob that launches a pod according to the job schedule. This pod backs up one or more databases, runs a consistency check on the backup files, and uploads them to the cloud provider bucket.

  2. Monitor the backup pod logs using kubectl logs pod/<neo4j-backup-pod-name> to check the progress of the backup.

  3. Check that the backup files and the consistency check reports have been uploaded to the cloud provider bucket or to the local storage, for example with the commands sketched below.
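
    The exact commands depend on where the backups were written. As a hedged sketch (the bucket name and storage account are assumptions taken from the examples above):

    # Inspect the cronjob and the jobs/pods it has created
    kubectl get cronjobs,jobs,pods

    # List the uploaded artifacts, depending on the provider
    aws s3 ls s3://my-bucket/
    gsutil ls gs://my-bucket/
    az storage blob list --container-name my-bucket --account-name <storage-account> --output table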

Aggregate a database backup chain

The aggregate backup command turns a backup chain into a single backup file. This is useful when you have a backup chain that you want to restore to a different cluster, or when you want to archive a backup chain. For more information about the benefits, syntax, and available options of the aggregate backup chain operation, see Aggregate a database backup chain.

Starting from 5.26 LTS, the neo4j-admin Helm chart supports an optional temporary directory to be used by the aggregation process instead of the backup working directory. This is especially useful when the size of the backup chain is larger than the pod's ephemeral storage. To avoid the backup aggregation job failing due to insufficient disk space, you can set the tempDir parameter to a persistent volume claim with enough space to accommodate the backup files.

The neo4j-admin Helm chart supports aggregating backup chains stored in an AWS S3 bucket or on a local mount. If enabled, a regular backup is not performed; only the aggregate backup runs.

  1. To aggregate a backup chain stored in an AWS S3 bucket or on a local mount, provide the following information in the backup-values.yaml file:

    If your backup chain is stored in AWS S3, set cloudProvider to aws and use either credentials (a secret) or a serviceAccount to connect to your AWS S3 bucket. For example:

    Using the awscreds secret to connect to your AWS S3 bucket
    neo4j:
      image: "neo4j/helm-charts-backup"
      imageTag: "2025.05.0"
      jobSchedule: "* * * * *"
      successfulJobsHistoryLimit: 3
      failedJobsHistoryLimit: 1
      backoffLimit: 3
    
    backup:
    
      cloudProvider: "aws"
      secretName: "awscreds"
      secretKeyName: "credentials"
    
      aggregate:
        enabled: true
        verbose: false
        keepOldBackup: false
        parallelRecovery: false
        fromPath: "s3://bucket1/bucket2"
        # Database name to aggregate. Can contain * and ? for globbing.
        database: "neo4j"
        # Optional temporary directory for aggregation process
        # If not specified, will use the backup directory
        tempDir: "/custom/temp/dir"
    
    resources:
      requests:
        ephemeralStorage: "4Gi"
      limits:
        ephemeralStorage: "5Gi"
    Using a serviceAccount to connect to your AWS S3 bucket
    neo4j:
      image: "neo4j/helm-charts-backup"
      imageTag: "2025.05.0"
      jobSchedule: "* * * * *"
      successfulJobsHistoryLimit: 3
      failedJobsHistoryLimit: 1
      backoffLimit: 3
    
    backup:
    
        cloudProvider: "aws"
    
        aggregate:
          enabled: true
          verbose: false
          keepOldBackup: false
          parallelRecovery: false
          fromPath: "s3://bucket1/bucket2"
          # Database name to aggregate. Can contain * and ? for globbing.
          database: "neo4j"
          # Optional temporary directory for aggregation process
          # If not specified, will use the backup directory
          tempDir: "/custom/temp/dir"
    
    #The service account must already exist in your cloud provider account and have the necessary permissions to manage your S3 bucket, as well as to download and upload files. See the example policy below.
    #{
    #   "Version": "2012-10-17",
    #    "Id": "Neo4jBackupAggregatePolicy",
    #    "Statement": [
    #        {
    #            "Sid": "Neo4jBackupAggregateStatement",
    #            "Effect": "Allow",
    #            "Action": [
    #                "s3:ListBucket",
    #                "s3:GetObject",
    #                "s3:PutObject",
    #                "s3:DeleteObject"
    #            ],
    #            "Resource": [
    #                "arn:aws:s3:::mybucket/*",
    #                "arn:aws:s3:::mybucket"
    #            ]
    #        }
    #    ]
    #}
    serviceAccountName: "my-service-account"
    
    resources:
      requests:
        ephemeralStorage: "4Gi"
      limits:
        ephemeralStorage: "5Gi"
    Aggregate a backup chain stored on a local mount
    neo4j:
      image: "neo4j/helm-charts-backup"
      imageTag: "2025.05.0"
      successfulJobsHistoryLimit: 1
      failedJobsHistoryLimit: 1
      backoffLimit: 1
    
    backup:
    
      aggregate:
        enabled: true
        verbose: false
        keepOldBackup: false
        parallelRecovery: false
        fromPath: "/backups"
        # Database name to aggregate. Can contain * and ? for globbing.
        database: "neo4j"
        # Optional temporary directory for aggregation process
        # If not specified, will use the backup directory
        tempDir: "/custom/temp/dir"
    
    tempVolume:
      persistentVolumeClaim:
        claimName: aggregate-pv-pvc
    
    resources:
      requests:
        ephemeralStorage: "4Gi"
      limits:
        ephemeralStorage: "5Gi"
  2. Install the neo4j-admin Helm chart using the configured backup-values.yaml file:

    helm install backup-name neo4j-admin -f /path/to/your/backup-values.yaml
  3. Monitor the pod logs using kubectl logs pod/<neo4j-aggregate-backup-pod-name> to check the progress of the aggregate backup operation.

  4. Verify that the aggregated backup file has replaced the backup chain in the cloud provider bucket or in the local storage.

Restore a single database

To restore a single offline database or a database backup, you first need to delete the database that you want to replace, unless you want to restore the backup as an additional database in your DBMS. Then, restore the database backup using the restore command of neo4j-admin. Finally, create the restored database using the Cypher command CREATE DATABASE name against the system database.

Delete the database that you want to replace

Before you restore the database backup, you must drop the database that you want to replace with that backup by running the Cypher command DROP DATABASE name against the system database. If you want to restore the backup as an additional database in your DBMS, you can proceed to the next section.

For Neo4j cluster deployments, you only need to run the Cypher command DROP DATABASE name on one of the cluster servers. The command is automatically routed from there to the other cluster members.

  1. Connect to the Neo4j DBMS:

    kubectl exec -it <release-name>-0 -- bash
  2. Connect to the system database using cypher-shell:

    cypher-shell -u neo4j -p <password> -d system
  3. Drop the database that you want to replace with the backup:

    DROP DATABASE neo4j;
  4. Exit the Cypher Shell command-line console:

    :exit;
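
If you want to confirm that the database has been removed before restoring, you can run the following against the system database (for example in a new cypher-shell session) and check that the dropped database is no longer listed:

SHOW DATABASES;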

Restore the database backup

You use the neo4j-admin database restore command to restore the database backup, and then the Cypher command CREATE DATABASE name against the system database to create the restored database. For more information about the command syntax, options, and usage, see Restore a database backup.

For Neo4j cluster deployments, restore the database backup on each cluster server.

  1. Run the neo4j-admin database restore command to restore the database backup:

    neo4j-admin database restore neo4j --from-path=/backups/neo4j --expand-commands
  2. Connect to the system database using cypher-shell:

    cypher-shell -u neo4j -p <password> -d system
  3. Create the neo4j database.

    For Neo4j cluster deployments, you only need to run the Cypher command CREATE DATABASE name on one of the cluster servers.

    CREATE DATABASE neo4j;
  4. Open http://<external-ip>:7474/browser/ in a web browser and check that all data has been successfully restored.

  5. neo4j 数据库执行 Cypher 命令,例如

    MATCH (n) RETURN n

    If you backed up your database with the --include-metadata option, you can manually restore the users and roles metadata. For more information, see Restore a database backup → Example.
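
    As a hedged sketch of that manual step (the script location and parameter are assumptions based on the defaults described in Restore a database backup; verify them for your deployment), the restore produces a restore_metadata.cypher script under the data directory, which you can replay against the system database:

    cypher-shell -u neo4j -p <password> -d system \
      --param "database => 'neo4j'" \
      --file /var/lib/neo4j/data/scripts/neo4j/restore_metadata.cypher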

To restore the system database, follow the steps described in Dump and load databases (offline).