Caused by: java.io.IOException: Map failed

We got the following error recently in a case:

auto commit error...:org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1521)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:614)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:919)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:283)
at org.apache.lucene.store.MMapDirectory$MMapIndexInput.<init>(MMapDirectory.java:228)

Essentially the problem here is that mmap is failing to map a file into the process's virtual address space. This can seem like a strange error, because memory-mapping a file does not actually load its contents into physical memory, so you wouldn't expect it to fail. In this case, the cause was an OS-imposed limit on how large the process's virtual address space could grow. Since Solr (via Lucene's MMapDirectory) memory-maps every index file it reads, the VIRT number shown by top can be very large. However, that is reserved address space, not memory actually in use.
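To make the distinction concrete, here is a minimal, hypothetical Java sketch of the same call that fails in the stack trace above (FileChannelImpl.map). Mapping a large region only reserves address space; pages become resident only when touched:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MmapDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("mmap-demo", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw");
             FileChannel ch = raf.getChannel()) {
            // Map 64 MB: this reserves virtual address space (the VIRT column
            // in top), but no physical memory is used yet. This is the same
            // FileChannel.map call that throws "java.io.IOException: Map failed"
            // when the process runs into its virtual memory limit (ulimit -v).
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_WRITE, 0, 64L * 1024 * 1024);
            // Touching a page makes only that single page resident (RES).
            buf.put(0, (byte) 42);
            System.out.println(buf.get(0));
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

If `ulimit -v` is set low enough, the `ch.map(...)` line above reproduces the exact `Map failed` exception from the stack trace.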

The fix was simply to run ulimit -v unlimited (or whatever value suits your environment) for the user that starts Solr, then to make the change permanent in the system limits configuration (/etc/security/limits.conf on most Linux distributions; note that ulimit values are per-process limits, not sysctl settings).
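A sketch of the commands involved, assuming the Solr process runs as a user named "solr" (adjust the user and values to your setup):

```shell
# Inspect the current virtual-memory limit for this shell
# (a size in KB, or "unlimited").
ulimit -v

# Raise it for the current session; a non-root user can only raise
# the soft limit up to the configured hard limit.
ulimit -v unlimited

# To make it permanent, add lines like these to /etc/security/limits.conf
# ("as" is the address-space limit that ulimit -v controls):
#   solr  soft  as  unlimited
#   solr  hard  as  unlimited

ulimit -v
```

After editing limits.conf, the Solr user must log in again (or the service must be restarted through a PAM-aware mechanism) for the new limits to take effect.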
