Kibana 503 Error License Not Available
Last updated: Jun 30, 2022
After setting up Security Onion as a single, standalone node, I ran into a strange issue when trying to access Kibana, with an unexpected solution. Here are my notes from troubleshooting this issue.
TLDR: I had to increase heap sizes.
…
Sometimes I can’t log in at all; at other times I can, but I get an HTTP 503 error with a “License is not available” message.
Ran sudo so-elastic-restart and waited for it to finish. Loaded Kibana again and it worked temporarily, but after a reload it showed 503 unavailable again.
Ran sudo so-kibana-restart and waited for it to finish. Now Kibana says “404 page not found”.
Checked the Elasticsearch logs with less /opt/so/log/elasticsearch/seconion-2022-05-16.log.gz.
The logs show dozens of lines like:
[2022-05-16T22:22:44,006][INFO ][org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService] attempting to trigger G1GC due to high heap usage [604556304]
[2022-05-16T22:22:44,430][INFO ][org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService] GC did bring memory usage down, before [604556304], after [597125592], allocations [1], duration [424]
This suggests memory usage is too high.
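To see how often the breaker was firing, these lines can be counted across the rotated logs. A minimal sketch (the sample file below is only so the pipeline runs anywhere; in practice, point it at the real logs under /opt/so/log/elasticsearch/):

```shell
# Create a sample gzipped log so the pipeline is self-contained;
# the real files live under /opt/so/log/elasticsearch/.
mkdir -p /tmp/es-logs
printf '%s\n' \
  '[2022-05-16T22:22:44,006][INFO ][...HierarchyCircuitBreakerService] attempting to trigger G1GC due to high heap usage [604556304]' \
  '[2022-05-16T22:22:44,430][INFO ][...HierarchyCircuitBreakerService] GC did bring memory usage down, before [604556304], after [597125592], allocations [1], duration [424]' \
  | gzip > /tmp/es-logs/seconion-2022-05-16.log.gz

# Count circuit-breaker events in a day's log
gzip -dc /tmp/es-logs/seconion-2022-05-16.log.gz | grep -c 'CircuitBreakerService'
```

A steady stream of these per day is a strong hint the heap is undersized.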
free -h showed 937M free, but given how Docker reports memory, it could still be a memory issue.
Checked the Kibana logs with less /opt/so/log/kibana/kibana-20220515.log.gz:
{"ecs":{"version":"1.12.0"},"@timestamp":"2022-05-16T00:00:53.713+00:00","message":"Failed to poll for work: [parent] Data too large, data for [<http_request>] would be [598877710/571.1mb], which is larger than the limit of [597688320/570mb], real usage: [598865256/571.1mb], new bytes reserved: [12454/12.1kb], usages [request=0/0b, fielddata=127489/124.5kb, in_flight_requests=12454/12.1kb, model_inference=0/0b, eql_sequence=0/0b, accounting=25865836/24.6mb]","log":{"level":"ERROR","logger":"plugins.taskManager"},"process":{"pid":293}}
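Those numbers also hint at what heap Elasticsearch was running with: the parent circuit breaker defaults to 95% of the JVM heap, so the 570mb limit in the log implies roughly a 600MB heap. A quick sanity check, assuming the default indices.breaker.total.limit setting:

```shell
# Back out the heap size from the breaker limit in the log,
# assuming the default parent breaker limit of 95% of heap.
awk 'BEGIN {
  limit = 597688320              # bytes ("limit of [597688320/570mb]")
  heap  = limit / 0.95           # default indices.breaker.total.limit is 95%
  printf "heap ~= %.0f MB\n", heap / 1024 / 1024
}'
# prints: heap ~= 600 MB
```

So the whole Elasticsearch JVM had well under a gigabyte to work with.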
Shut down the box and replaced the 16GB DDR4 2666MHz with 32GB DDR4 3200MHz (XMP) - I didn’t bother checking the HP BIOS to see what speed it’s actually running at.
Memory usage looks much happier now:
~: free -h
              total        used        free      shared  buff/cache   available
Mem:            31G        1.4G         28G        9.6M        795M         29G
Swap:          8.0G          0B        8.0G
Now I run watch -n 1 so-status to see when all systems are a go…
Unfortunately, Kibana still shows “404 page not found”. Time to check the logs again.
Restarted again with so-elasticsearch-restart. Now it said:
Application Not Found
No application was found at this URL. Try going back or choosing an app from the menu.
Reloaded again and this time it came up, with dozens of “x of 82 shards failed” errors.
It worked for a little bit again but after trying to browse, it once again shows 503 Service Unavailable.
In elastic log I see:
[2022-05-17T23:19:13,889][ERROR][org.elasticsearch.xpack.core.async.AsyncTaskIndexService] failed to store async-search [fQA8883CQluSmTz78gEwWg]
org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<reused_arrays>] would be [603556864/575.5mb], which is larger than the limit of [597688320/570mb], real usage: [603556728/575.5mb], new bytes reserved: [136/136b], usages [request=136/136b, fielddata=7770/7.5kb, in_flight_requests=33770/32.9kb, model_inference=0/0b, eql_sequence=0/0b, accounting=30806036/29.3mb]
It seems this is the problem I’m having: Kibana becomes unavailable because of “Data too large” (issue #56500).
It seems I need to increase the Elasticsearch heap size through Docker, like this person was asking for help with.
I think I may have found instructions, but they are outdated.
…
After more searching, I found glorious documentation on changing the heap! I had trouble finding it at first since I didn’t know what I was looking for, and searching the docs for the keywords I was seeing wasn’t bringing anything up.
The location of the config file is noted in the docs as:
/opt/so/saltstack/local/pillar/minions/$minion.sls
But in my case the file is here:
/opt/so/saltstack/local/pillar/minions/seconion_standalone.sls
In this file I changed esheap (in two locations) and lsheap (in one location) to 5000m. Maybe overkill, but it’s in the spirit of troubleshooting - plus, I have the RAM for it now.
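For reference, the heap settings in the minion pillar end up looking roughly like this (a sketch from memory; the exact nesting and surrounding keys in your .sls will differ - just find the existing esheap and lsheap entries and change their values):

```yaml
# /opt/so/saltstack/local/pillar/minions/seconion_standalone.sls (excerpt)
elasticsearch:
  esheap: '5000m'
logstash:
  lsheap: '5000m'
```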
I then restarted with sudo so-elasticsearch-restart and, for good measure, sudo so-kibana-restart.
After elastic and kibana came back up, I logged into Kibana and it loaded normally. I verified expected functionality and watched the logs, and everything looks good!
Moral of the story: check the documentation, even if you don’t immediately find what you’re looking for!