I just installed Arch for the first time and I'm afraid I messed up some config file. I installed docker, enabled and started the service, all fine.
Then I tried
docker run -p 9000:80 \
-e "PGADMIN_DEFAULT_EMAIL=user@domain.com" \
-e "PGADMIN_DEFAULT_PASSWORD=SuperSecret" \
dpage/pgadmin4
When I go to the browser and try to hit localhost:9000, the request is left hanging.
Also, if I turn off the wifi adapter, it immediately returns that the page is not reachable.
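In case it helps to narrow this down, the container state and the published port can be checked from a terminal; a minimal sketch (standard docker and iproute2 commands):

```shell
# Show running containers and their port mappings;
# the pgadmin container should list 0.0.0.0:9000->80/tcp
docker ps --format '{{.Names}}  {{.Ports}}'

# Confirm something on the host is listening on port 9000
ss -tln | grep ':9000'
```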
I am running Arch with Gnome on top.
Thanks in advance
Well, I tried your command, and pgadmin came up successfully in my browser after about thirty seconds.
You don't mention whether your docker command is successful (commonly, people forget to add their own user to the docker group). This was my output:
a8555ad5f272: Pull complete
f01c552a0315: Pull complete
Digest: sha256:a5a656e1d5fd6c863c45df3c4f458d8cca3b2186c75119c5a7985922c5ab7dd5
Status: Downloaded newer image for dpage/pgadmin4:latest
NOTE: Configuring authentication for SERVER mode.
[2019-04-21 08:28:02 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-04-21 08:28:02 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
[2019-04-21 08:28:02 +0000] [1] [INFO] Using worker: threads
[2019-04-21 08:28:02 +0000] [79] [INFO] Booting worker with pid: 79
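On the docker-group point: if `docker run` fails with a permission error on the docker socket, the usual fix is to add your user to the docker group. A sketch (note the group change only applies to new login sessions):

```shell
# Add the current user to the docker group created by the docker package
sudo usermod -aG docker "$USER"

# Group membership is read at login; start a shell with the new group
# to test immediately, or log out and back in
newgrp docker

# Should now succeed without sudo
docker info
```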
Remember that your /etc/hosts file should look something like:
127.0.0.1 localhost
::1 localhost
127.0.1.1 localhost.localdomain yourcomputershostname
After I changed the hosts file, it worked.
I tried to start a docker-compose, and it stopped working
File: docker-compose.yml

version: "3"
services:
  postgres:
    container_name: hyp_postgres
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: "${DB_USERNAME}"
      POSTGRES_PASSWORD: "${DB_PASSWORD}"
    ports:
      - "5432:5432"
    networks:
      - general_network
  pgadmin:
    container_name: hyp_pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: "user@hyp.pt"
      PGADMIN_DEFAULT_PASSWORD: "secret"
    ports:
      - "3001:80"
    depends_on:
      - postgres
    networks:
      - general_network
  elasticsearch:
    container_name: "hyp_elasticsearch"
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.3
    restart: always
    environment:
      - "discovery.type=single-node"
    ports:
      - "9200:9200"
    networks:
      - general_network
  kibana:
    container_name: "hyp_kibana"
    image: docker.elastic.co/kibana/kibana-oss:6.4.3
    restart: always
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - "3002:5601"
    depends_on:
      - elasticsearch
    networks:
      - general_network
  mongodb:
    container_name: "hyp_mongodb"
    image: "mongo:latest"
    restart: always
    ports:
      - "27017:27017"
    networks:
      - general_network
  redis:
    container_name: "hyp_redis"
    image: "redis:latest"
    restart: always
    ports:
      - "6379:6379"
    networks:
      - general_network

networks:
  general_network:
    driver: bridge
If I run "docker-compose up", it outputs:
Attaching to hyp_mongodb, hyp_redis, hyp_postgres, hyp_elasticsearch, hyp_pgadmin, hyp_kibana
hyp_mongodb | 2019-04-21T13:49:32.147+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
hyp_redis | 1:C 21 Apr 2019 13:49:32.100 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
hyp_redis | 1:C 21 Apr 2019 13:49:32.100 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
hyp_redis | 1:C 21 Apr 2019 13:49:32.100 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=7b37175d03c0
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] db version v4.0.9
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] allocator: tcmalloc
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] modules: none
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] build environment:
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] distmod: ubuntu1604
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] distarch: x86_64
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] target_arch: x86_64
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 * Running mode=standalone, port=6379.
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I STORAGE [initandlisten]
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
hyp_elasticsearch | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 # Server initialized
hyp_postgres | 2019-04-21 13:49:32.273 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
hyp_postgres | 2019-04-21 13:49:32.273 UTC [1] LOG: listening on IPv6 address "::", port 5432
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
hyp_mongodb | 2019-04-21T13:49:32.151+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3346M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 * DB loaded from disk: 0.000 seconds
hyp_redis | 1:M 21 Apr 2019 13:49:32.102 * Ready to accept connections
hyp_postgres | 2019-04-21 13:49:32.281 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
hyp_postgres | 2019-04-21 13:49:32.304 UTC [23] LOG: database system was interrupted; last known up at 2019-04-21 13:49:15 UTC
hyp_postgres | 2019-04-21 13:49:32.577 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress
hyp_postgres | 2019-04-21 13:49:32.581 UTC [23] LOG: invalid record length at 0/1652B40: wanted 24, got 0
hyp_postgres | 2019-04-21 13:49:32.581 UTC [23] LOG: redo is not required
hyp_postgres | 2019-04-21 13:49:32.603 UTC [1] LOG: database system is ready to accept connections
hyp_mongodb | 2019-04-21T13:49:33.138+0000 I STORAGE [initandlisten] WiredTiger message [1555854573:138269][1:0x7fcf73ad9a80], txn-recover: Main recovery loop: starting at 6/15232 to 8/256
hyp_mongodb | 2019-04-21T13:49:33.366+0000 I STORAGE [initandlisten] WiredTiger message [1555854573:366384][1:0x7fcf73ad9a80], txn-recover: Recovering log 6 through 8
hyp_mongodb | 2019-04-21T13:49:33.452+0000 I STORAGE [initandlisten] WiredTiger message [1555854573:452634][1:0x7fcf73ad9a80], txn-recover: Recovering log 7 through 8
hyp_mongodb | 2019-04-21T13:49:33.531+0000 I STORAGE [initandlisten] WiredTiger message [1555854573:531607][1:0x7fcf73ad9a80], txn-recover: Recovering log 8 through 8
hyp_mongodb | 2019-04-21T13:49:33.608+0000 I STORAGE [initandlisten] WiredTiger message [1555854573:608876][1:0x7fcf73ad9a80], txn-recover: Set global recovery timestamp: 0
hyp_mongodb | 2019-04-21T13:49:33.638+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
hyp_mongodb | 2019-04-21T13:49:33.651+0000 I CONTROL [initandlisten]
hyp_mongodb | 2019-04-21T13:49:33.651+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
hyp_mongodb | 2019-04-21T13:49:33.651+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
hyp_mongodb | 2019-04-21T13:49:33.651+0000 I CONTROL [initandlisten]
hyp_mongodb | 2019-04-21T13:49:33.681+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
hyp_mongodb | 2019-04-21T13:49:33.683+0000 I NETWORK [initandlisten] waiting for connections on port 27017
hyp_pgadmin | [2019-04-21 13:49:34 +0000] [1] [INFO] Starting gunicorn 19.9.0
hyp_pgadmin | [2019-04-21 13:49:34 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
hyp_pgadmin | [2019-04-21 13:49:34 +0000] [1] [INFO] Using worker: threads
hyp_pgadmin | [2019-04-21 13:49:34 +0000] [76] [INFO] Booting worker with pid: 76
hyp_elasticsearch | [2019-04-21T13:49:35,064][INFO ][o.e.n.Node ] [] initializing ...
hyp_elasticsearch | [2019-04-21T13:49:35,150][INFO ][o.e.e.NodeEnvironment ] [99WePjl] using [1] data paths, mounts [[/ (overlay)]], net usable_space [133gb], net total_space [180gb], types [overlay]
hyp_elasticsearch | [2019-04-21T13:49:35,151][INFO ][o.e.e.NodeEnvironment ] [99WePjl] heap size [989.8mb], compressed ordinary object pointers [true]
hyp_elasticsearch | [2019-04-21T13:49:35,153][INFO ][o.e.n.Node ] [99WePjl] node name derived from node ID [99WePjlaTTuJmLBzn2CYgQ]; set [node.name] to override
hyp_elasticsearch | [2019-04-21T13:49:35,154][INFO ][o.e.n.Node ] [99WePjl] version[6.4.3], pid[1], build[default/tar/fe40335/2018-10-30T23:17:19.084789Z], OS[Linux/5.0.7-arch1-1-ARCH/amd64], JVM["Oracle Corporation"/OpenJDK 64-Bit Server VM/10.0.2/10.0.2+13]
hyp_elasticsearch | [2019-04-21T13:49:35,154][INFO ][o.e.n.Node ] [99WePjl] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.lnhfOqP0, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
hyp_elasticsearch | [2019-04-21T13:49:37,301][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [aggs-matrix-stats]
hyp_elasticsearch | [2019-04-21T13:49:37,302][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [analysis-common]
hyp_elasticsearch | [2019-04-21T13:49:37,302][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [ingest-common]
hyp_elasticsearch | [2019-04-21T13:49:37,302][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [lang-expression]
hyp_elasticsearch | [2019-04-21T13:49:37,302][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [lang-mustache]
hyp_elasticsearch | [2019-04-21T13:49:37,302][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [lang-painless]
hyp_elasticsearch | [2019-04-21T13:49:37,302][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [mapper-extras]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [parent-join]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [percolator]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [rank-eval]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [reindex]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [repository-url]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [transport-netty4]
hyp_elasticsearch | [2019-04-21T13:49:37,303][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [tribe]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-core]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-deprecation]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-graph]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-logstash]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-ml]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-monitoring]
hyp_elasticsearch | [2019-04-21T13:49:37,304][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-rollup]
hyp_elasticsearch | [2019-04-21T13:49:37,305][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-security]
hyp_elasticsearch | [2019-04-21T13:49:37,305][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-sql]
hyp_elasticsearch | [2019-04-21T13:49:37,305][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-upgrade]
hyp_elasticsearch | [2019-04-21T13:49:37,305][INFO ][o.e.p.PluginsService ] [99WePjl] loaded module [x-pack-watcher]
hyp_elasticsearch | [2019-04-21T13:49:37,306][INFO ][o.e.p.PluginsService ] [99WePjl] loaded plugin [ingest-geoip]
hyp_elasticsearch | [2019-04-21T13:49:37,306][INFO ][o.e.p.PluginsService ] [99WePjl] loaded plugin [ingest-user-agent]
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["status","plugin:kibana@6.4.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["status","plugin:elasticsearch@6.4.3","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["status","plugin:timelion@6.4.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["status","plugin:console@6.4.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["status","plugin:metrics@6.4.3","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["info","http","server","listening"],"pid":1,"message":"Server running at http://0:5601"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.4:9200"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:37Z","tags":["status","plugin:elasticsearch@6.4.3","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200/.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
hyp_elasticsearch | [2019-04-21T13:49:39,230][WARN ][o.e.d.s.ScriptModule ] Script: returning default values for missing document values is deprecated. Set system property '-Des.scripting.exception_for_missing_value=true' to make behaviour compatible with future major versions.
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:40Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:40Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
hyp_elasticsearch | [2019-04-21T13:49:40,725][INFO ][o.e.x.s.a.s.FileRolesStore] [99WePjl] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
hyp_elasticsearch | [2019-04-21T13:49:41,178][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/117] [Main.cc@109] controller (64 bit): Version 6.4.3 (Build 7a0781676dd492) Copyright (c) 2018 Elasticsearch BV
hyp_elasticsearch | [2019-04-21T13:49:41,843][INFO ][o.e.d.DiscoveryModule ] [99WePjl] using discovery type [single-node]
hyp_elasticsearch | [2019-04-21T13:49:42,539][INFO ][o.e.n.Node ] [99WePjl] initialized
hyp_elasticsearch | [2019-04-21T13:49:42,540][INFO ][o.e.n.Node ] [99WePjl] starting ...
hyp_elasticsearch | [2019-04-21T13:49:42,687][INFO ][o.e.t.TransportService ] [99WePjl] publish_address {172.18.0.4:9300}, bound_addresses {0.0.0.0:9300}
hyp_elasticsearch | [2019-04-21T13:49:42,746][WARN ][o.e.b.BootstrapChecks ] [99WePjl] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
hyp_elasticsearch | [2019-04-21T13:49:42,779][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [99WePjl] publish_address {172.18.0.4:9200}, bound_addresses {0.0.0.0:9200}
hyp_elasticsearch | [2019-04-21T13:49:42,780][INFO ][o.e.n.Node ] [99WePjl] started
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:43Z","tags":["status","plugin:elasticsearch@6.4.3","error"],"pid":1,"state":"red","message":"Status changed from red to red - Service Unavailable","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200/."}
hyp_elasticsearch | [2019-04-21T13:49:43,154][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [99WePjl] Failed to clear cache for realms [[]]
hyp_elasticsearch | [2019-04-21T13:49:43,191][INFO ][o.e.l.LicenseService ] [99WePjl] license [809f92fc-e758-4cd3-b9ec-813aa595b42b] mode [basic] - valid
hyp_elasticsearch | [2019-04-21T13:49:43,204][INFO ][o.e.g.GatewayService ] [99WePjl] recovered [0] indices into cluster_state
hyp_kibana | {"type":"log","@timestamp":"2019-04-21T13:49:45Z","tags":["status","plugin:elasticsearch@6.4.3","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Service Unavailable"}
Gracefully stopping... (press Ctrl+C again to force)
Last edited by goamaral (2019-04-21 13:54:45)
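Unrelated to the connectivity problem, but visible in the logs above: Elasticsearch warns that vm.max_map_count [65530] is too low. The documented fix is a host-side sysctl; a sketch:

```shell
# Raise the mmap limit Elasticsearch asks for (takes effect immediately)
sudo sysctl -w vm.max_map_count=262144

# Persist it across reboots via a sysctl.d drop-in
echo 'vm.max_map_count = 262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
```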
Some help?
Some help?
How to post. A sincere effort to use modest and proper language and grammar is a sign of respect toward the community.
it stopped working
That is not an error message. And what stopped working exactly?
Don't bump your thread. Instead, provide more relevant information. Is your title still relevant to your current problem? If not, mark this one as solved and then start a new one with better information on what steps you've taken, what you expect to see and then what happens instead, including relevant logs and configuration.
I am not sure, but I think the title still reflects my problem.
My previous reply shows my docker-compose.yml, along with the logs that docker-compose up gives me.
If I try to access any of the services, they give me no response.
I think it has something to do with the network (general_network) created by docker-compose.
My trouble here is not having any error messages. If I try to access localhost:3001 in the browser, I expect to be greeted with the pgadmin login page and to see a line appear in the docker container logs stating that a request has been made. Instead, the request is left hanging until it times out.
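A hanging browser tab hides useful detail; probing the published port with curl and a short timeout distinguishes "connection refused" from "packets silently dropped". A sketch, assuming the 3001:80 mapping from the compose file:

```shell
# -v prints whether the TCP handshake completes; --max-time bounds the hang.
# "Connection refused" means nothing is listening; a timeout suggests
# the packets are being dropped somewhere on the way to the container.
curl -v --max-time 5 http://localhost:3001/
```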
If I try to access any of the services, they give me no response.
How exactly are you accessing those services (you imply more than one)? Are you talking about just pgadmin running on port 3001? What other services are you accessing?
by the browser, localhost:3001
OK - what protocol are you using? Again, be more specific about what *exactly* you are doing. Is it http://localhost:3001 or https://localhost:3001?
Is there anything in the logs indicating that pgadmin is listening on port 3001? Is there an IP address that the container is getting assigned that you can try instead of 'localhost'?
The docker-compose file exposes pgadmin's container port 80 on local port 3001.
As another example, the redis container runs on port 6379 in the container and is exposed on local port 6379.
I try to connect to redis using the redis cli. I receive no response.
I also tried the same thing with mongodb, using the mongo cli.
I receive no response from any container created with docker-compose.
The ports are assigned and in use. For example, I verify redis is bound to its port by running ss -tulpn | grep 6379
I can get the IP of each container with (redis in this example):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hyp_redis
I get an IP and try to connect with redis-cli, using this IP as the host. I run:
redis-cli -h 172.18.0.5
And no response is returned.
I think the IP above is in the docker network range.
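A timeout when hitting the container IP directly usually points at host-side packet forwarding or firewall rules rather than at the containers themselves. A few hedged checks worth running (the exact chain contents depend on the local firewall setup):

```shell
# Docker needs IPv4 forwarding on the host; should report 1
sysctl net.ipv4.ip_forward

# Docker manages the FORWARD chain (default policy DROP plus DOCKER-* chains);
# a flushed or overriding ruleset here silently drops container traffic
sudo iptables -L FORWARD -n

# Confirm the compose network exists and the containers are attached to it
docker network ls
docker network inspect "$(docker network ls -q --filter name=general_network)"
```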
Last edited by goamaral (2019-04-25 17:53:47)