
(Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017]) - After scaling up #117

davorceman opened this issue Apr 29, 2020 · 4 comments


davorceman commented Apr 29, 2020

I created a cluster with two replicas, and in the end it worked correctly.
Now I've increased the replicas from 2 to 3, and I'm getting this error only on that one new sidecar.

> [email protected] start /opt/cvallance/mongo-k8s-sidecar
> forever src/index.js

warn: --minUptime not set. Defaulting to: 1000ms
warn: --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
Using mongo port: 27017
Starting up mongo-k8s-sidecar
The cluster domain 'cluster.local' was successfully verified.
Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35)
    at Pool.emit (events.js:182:13)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12)
    at Object.onceWrapper (events.js:273:13)
    at Connection.emit (events.js:182:13)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:189:49)
    at Object.onceWrapper (events.js:273:13)
    at Socket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
  name: 'MongoError',
  message:
   'failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]' }

But when I check the sidecar in mongo-0, I see that this third member was also added and the cluster is working.
I connected to it and checked: the data is replicated onto it.

Addresses to add: [ 'mongo-2.mongo.mongo.svc.cluster.local:27017' ]
Addresses to remove: []
replSetReconfig { _id: 'rs1',
  version: 4,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'mongo-0.mongo.mongo.svc.cluster.local:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1,
       host: 'mongo-1.mongo.mongo.svc.cluster.local:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 2,
       host: 'mongo-2.mongo.mongo.svc.cluster.local:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: 60000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5ea8507d4645a422737db5e2 } }

What does this error mean? I've struggled with it for days while trying to find a way to set this up.
This is my configuration:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: {{ .Values.Namespace }}

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: {{ .Values.Namespace }}
  labels:
    role: mongo
    environment: {{ .Values.Environment }}
spec:
  ports:
  - port: {{ .Values.Mongo.Port }}
    targetPort: {{ .Values.Mongo.Port }}
  clusterIP: None
  selector:
    role: mongo
    environment: {{ .Values.Environment }}

---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  namespace: {{ .Values.Namespace }}
spec:
  serviceName: mongo
  replicas: {{ .Values.Replicas }}
  template:
    metadata:
      labels:
        role: mongo
        environment: {{ .Values.Environment }}
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: {{ .Values.Mongo.Image }}
          command:
            - mongod
            - "--replSet"
            - rs1
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: {{ .Values.Mongo.Port }}
              protocol: TCP
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment={{ .Values.Environment }}"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongo"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: aws-efs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

The mongo image is 3.4 because I was not able to get the newest one working.
k8s is 1.14 on EKS.
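Side note, in case someone lands here on a newer cluster: `apps/v1beta1` for StatefulSet was removed in Kubernetes 1.16 (and `rbac.authorization.k8s.io/v1beta1` in 1.22), so the manifest above would need `apps/v1`, which also makes `spec.selector` mandatory and it must match the pod template labels. Roughly:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: {{ .Values.Namespace }}
spec:
  serviceName: mongo
  replicas: {{ .Values.Replicas }}
  selector:
    matchLabels:
      role: mongo
      environment: {{ .Values.Environment }}
  template:
    metadata:
      labels:
        role: mongo
        environment: {{ .Values.Environment }}
    # ...rest of the spec unchanged
```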

@igor9silva

Hi @davorceman, I'm facing a similar problem. Did you ever find a solution?


xu756 commented Feb 24, 2023

> Hi @davorceman, I'm facing a similar problem. Did you ever find a solution?

me too

@davorceman (Author)

Hi @igor9silva @xu756

idk... If I remember correctly, it worked even with this error.
I never found a solution, since we moved to DocumentDB shortly after.


ARu1ToT commented May 17, 2023

Hi @davorceman, I'm facing the same problem as you, but currently our MongoDB is working well. Did you ever find a way to solve it? Thanks.
