These notes are all based on my own experience. The steps might not be the best methods, but they worked for me and might save you long hours of stress. Feel free to drop a comment, question, suggestion or correction.
To resolve this issue (directory listing enabled in Apache), I needed to modify /etc/httpd/conf.d/autoindex.conf, /etc/httpd/conf.d/userdir.conf, and /etc/httpd/conf/httpd.conf.
In each file, find where Indexes appears as part of the directory Options, for example:
Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
and delete the word Indexes from every such Options line.
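After the change, the same line would look like this (your other options may differ), and Apache needs a restart to pick it up (assuming a systemd-based CentOS/RHEL setup, which matches the /etc/httpd paths):

Options MultiViews SymLinksIfOwnerMatch IncludesNoExec

sudo systemctl restart httpd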
Run the script with the flag set, like this:
THEANO_FLAGS='floatX=float32' python xxx.py
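Alternatively, the same flag can be set permanently in Theano's config file ~/.theanorc; a minimal sketch:

[global]
floatX = float32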
Caused by a deprecation in numpy 1.12.x. Install numpy 1.11.x or add .astype(np.int) to the part of the code causing the error.
pip install -U numpy==1.11.0
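A minimal sketch of the second fix, assuming (my guess at the common case) that the error comes from indexing an array with float values, which numpy 1.12.x no longer allows implicitly:

import numpy as np

data = np.arange(10)
idx = np.array([1.0, 3.0, 5.0])    # float indices are rejected by numpy 1.12.x
values = data[idx.astype(np.int)]  # explicit cast to int restores the old behaviour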
This happens when working with CSV files that contain very large cell text. From a Stack Overflow question, I found a solution: set csv.field_size_limit to the largest value the platform accepts, starting from sys.maxsize.
import sys
import csv

maxInt = sys.maxsize
decrement = True
while decrement:
    # decrease the maxInt value by factor 10
    # as long as the OverflowError occurs.
    decrement = False
    try:
        csv.field_size_limit(maxInt)
    except OverflowError:
        maxInt = int(maxInt / 10)
        decrement = True
This solution avoids the OverflowError by dividing maxInt by 10 until csv.field_size_limit accepts the value.
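Once the limit is raised, reading the file works as usual; a minimal sketch with a hypothetical file name:

import csv

with open('big_cells.csv') as f:  # hypothetical file name
    for row in csv.reader(f):
        pass  # cells larger than the default 131072-byte limit now parse without error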
Edit the Logstash JVM options file /etc/logstash/jvm.options and add or edit the lines
-Xms4g
-Xmx4g
Set the heap size you want; here I used 4GB for both the initial (-Xms) and maximum (-Xmx) heap.
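Then restart Logstash so the new heap size takes effect (assuming a systemd-based install):

sudo systemctl restart logstash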
Set the correct permissions on the new data and log directories, i.e. make elasticsearch the owner of these directories:
sudo chown -R elasticsearch /path/to/directory
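The directories in question are the ones pointed to by path.data and path.logs in /etc/elasticsearch/elasticsearch.yml; a sketch with hypothetical paths:

path.data: /data/elasticsearch
path.logs: /data/elasticsearch/logs

sudo chown -R elasticsearch /data/elasticsearch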
copy table_name(column1, column2) to '/path/to/file.csv' delimiter E'\t' csv header;
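The copy command above writes the file on the database server and typically needs superuser rights; if you only have client access, the psql \copy variant writes the file on the client machine with the same options (a sketch):

\copy table_name(column1, column2) to '/path/to/file.csv' delimiter E'\t' csv header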
grep -rnw '/path/to/folder/' -e "search string"
For example, to extract from line 20397949 to line 20406761, reading the lines from extract_from_file and writing them to extract_to_file:
sed -n 20397949,20406761p extract_from_file > extract_to_file
grep -nr search_text File_name