
Tag Archives: unix

Due to a disk failure on a server I needed to preserve several databases created for students. Some of them were empty; consequently, running mysqldump with the --all-databases option yielded an error and didn't work.
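
For reference, the single-shot command that failed was along these lines (password masked as in the rest of the post, and the output file name is just for illustration):

mysqldump -uroot -p****** --all-databases > all_databases.sql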

What worked instead was iterating through them and mysqldumping them individually with this shell one-liner:

for I in $(mysql -uroot -p****** -N -e 'show databases'); do mysqldump -uroot -p****** "$I" > "$I.sql"; done

I compressed the resulting files into a single tarball immediately after:

ls -1 *.sql | xargs tar -cvzf backupOfDatabases.tgz
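
As a quick sanity check, and as a reminder of the restore path, something like the following should do; the database name here is hypothetical and the credentials are the same masked ones as above:

# list what actually went into the tarball
tar -tzf backupOfDatabases.tgz
# recreate and reload one of the dumped databases
mysql -uroot -p****** -e 'create database if not exists someStudentDb'
mysql -uroot -p****** someStudentDb < someStudentDb.sql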

Writing heredocs in the shell from Emacs presented me with a little nuisance: since the RETurn key is bound to 'comint-send-input' (a function that sends the text to the shell process), I couldn't re-edit the previous line after having pressed it to insert a simple line break. (I should add that this is actually the usual unix bash behaviour; I just happen to use bash from Emacs and so wanted to overcome the problem there.)

Yesterday it dawned on me to just use “C-q C-j” in that situation (that's how you insert a literal line feed in Emacs regardless of the mode you are in).

Given that the prospect of typing “C-q C-j” several times looked rather awkward, I thought of binding RET with the Control key for that:

(global-set-key (quote [(control return)]) (quote newline))

But then the idea of going the other way around seemed more appealing: why not make newlines with a simple RET, and use the Control key to send the input? Besides, it makes sense to prevent hitting return by accident; I hate it when my pinky lands there while reaching for the pipe “|” key.

While I was at it, I decided to also rebind the TAB functionality so I could further improve the readability of my scripts by indenting them at the same time.

Implementing those changes is simple when you’ve got the luxury of a highly-configurable tool:

(defun my-shell-hook ()
  (local-set-key (quote [(return)]) (quote newline))
  (local-set-key (quote [(control return)]) (quote comint-send-input))
  ;; my-unindent and indent-or-complete are custom functions (not shown here)
  (local-set-key (kbd "M-i")  'my-unindent)
  (local-set-key (kbd "C-i")  'indent-or-complete))

(add-hook 'shell-mode-hook 'my-shell-hook)

UPDATE: After a day trying this set-up, I think I'd rather stick to the normal binding of the RET key. I'm leaning toward using C-j and C-m instead, as I caught myself reaching for either of them to insert line breaks. Here are the replacement bindings:

(defun my-shell-hook ()
  (local-set-key (kbd "M-i")  'my-unindent)
  (local-set-key (kbd "C-i")  'indent-or-complete)
; (local-set-key (quote [(return)]) (quote newline))
; (local-set-key (quote [(control return)]) (quote comint-send-input))
;; I actually liked these two better for adding newlines, leaving return with its default behavior
  (local-set-key (quote [(control j)]) (quote newline))
  (local-set-key (quote [(control m)]) (quote newline))
  (local-set-key (quote [(return)]) (quote comint-send-input)))


(add-hook 'shell-mode-hook 'my-shell-hook)

Now that I'm getting fond of using a heredoc to insert mysql scripts into the bash shell, I have put this into my Emacs initialization file. The same shortcut (Control+Shift+F11) will conveniently write different parameters depending on which shell I'm sitting in.

UPDATE: it seems I spoke too soon. Asking the shell for the present working directory, formerly via (let ((a (shell-command-to-string "pwd | tr -d '\\n'"))) ...), didn't actually work once shells on different servers had been opened via Tramp.
The correction below (less elegant) depends on the shell buffer being created or renamed with an identifiable name, which could be the hostname or whatever consistent nomenclature we choose.
Anyway, until I figure out a better way, this does the job of inserting the right parameters into each shell:

(defun my-heredoc-sql-invocation-from-shell ()
  "Insert the appropriate parameters to run a heredoc mysql query depending on which shell I'm in"
  (interactive)
  (let ((a (buffer-name (window-buffer (minibuffer-selected-window))))
        (b nil))
    (cond ((string-match "serverA" a)
           (setq b "mysql -uroot mainDbToHitAt_A --password=`cat /etc/security/mysqlpassword` -t -vv <<\\!"))
          ((string-match "serverB" a)
           (setq b "mysql -ualpha mainDbToHitAt_B --password=`cat /etc/security/mysqlpassword` -t -vv <<\\!"))
          ((string-match "serverC" a)
           (setq b "mysql -uroot mainDbToHitAt_C  -t <<\\!")))
    ;; only insert when the buffer name matched one of the known servers
    (when b (insert b))))

;; key shortcut to bind it to
(global-set-key (kbd "C-S-<f11>")  'my-heredoc-sql-invocation-from-shell)
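
For context, the sort of heredoc session those inserted parameters start looks roughly like this in the serverA shell; the query and table name are made up, and the closing ! matches the <<\! delimiter inserted by the function:

mysql -uroot mainDbToHitAt_A --password=`cat /etc/security/mysqlpassword` -t -vv <<\!
SELECT id, name FROM some_table LIMIT 10;
!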

Here's an example of how to run PHP on the command line, something I almost never do and would like to remember by writing it down.
Simply echo the PHP statement (within its corresponding tags) and pipe it to the php interpreter. E.g.:

echo '<?php $str = "2144338"; echo substr($str,0,2); ?>' | php

or even more usefully:

echo '<?php phpinfo(); ?>' | php
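
If the PHP CLI is available, its -r flag gives a similar shortcut without needing the tags; the same substr example from above becomes:

php -r 'echo substr("2144338", 0, 2);'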

I struggled the other day doing some sysadmin work to recover data from a single table of our database. Editing big files (of several gigabytes) is no picnic even for vi(m) or Emacs, so it wasn't trivial to find a quick way to isolate the parts needed. For what it's worth, here's the method I followed with success, resorting to simple cat and sed commands on the command line:

  1. Get the creation statement for the table to be recovered

    cat your_entire_backup_file.sql | sed -ne '/Table structure for table `your_table`/,/-- Dump/p' > table_creation.sql
    
  2. Get the data

    cat your_entire_backup_file.sql | sed -ne '/INSERT INTO `your_table`/,/Table structure/p' > data_dump.sql
    
  3. Join the two into a single file

    cat table_creation.sql data_dump.sql > data_for_single_table_to_copy.sql
    
  4. Optionally, in case you need to extract only some rows from that previous instance of the table, as was my case with records deleted by mistake, you might want to create a temporary table from which to later select the desired rows. In order to do that, the table name should be altered in the creation and insertion statements:

    sed -i 's#your_table#your_temp_table#g' data_for_single_table_to_copy.sql
    
  5. Now we are ready to create that temporary table with its data inside our database:

    mysql -u your_username -p your_database_name < your_path_to_the_file/data_for_single_table_to_copy.sql
    
  6. Voila! The table is there, containing the information you needed. Now it is up to you to extract and reinsert whatever you want into the original table (see the sketch right after this list).
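
The following is only a hypothetical sketch of that last step: pulling the accidentally deleted rows out of the temporary table and putting them back into the original one. The column layout and the WHERE clause depend entirely on your schema, and the id range here is made up.

    mysql -u your_username -p your_database_name -e "INSERT INTO your_table SELECT * FROM your_temp_table WHERE id BETWEEN 100 AND 200;"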

Note: Different sed ranges could be used to isolate and put together the table creation and data parts in a single pass. The awk command can also be used instead since, like sed, it permits collecting portions of text by matching from the beginning to the end of a block. Just make sure you know which table comes after the one you are extracting.

awk '/Table structure for table `your_table`/,/Table structure for table `your_next_table`/{print}' your_entire_backup_file.sql > data_for_single_table_to_copy.sql

In case the table to extract happens to be the last one (which, again, you can check with a mysql "show tables" command), modify the last part of the regexp to match accordingly.

The simple solution with awk for printing the last field of each line is:

awk '{print $NF}'

This is useful when, as in the following, we want to list just the files, skipping directories:

ls -l | awk 'NR!=1 && !/^d/ {print $NF}'

There are other options, of course; take a look at this example from a nice collection of bash and unix tips.
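
One such alternative, assuming GNU find is available, skips awk altogether (note that, unlike plain ls, it also lists hidden files):

find . -maxdepth 1 -type f -printf '%f\n'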

This is more of a quick reminder for myself on the tricky task of escaping quotes when running an awk or sed command on the bash command line. I solved the issue in my case by using the octal representation of a single quote (\47), taken from here, where more options are shown.

Having previously arranged the list of stale links with their replacements in a two-column format that looked like:

http://addressOld.com http://www.goodWorkingOne.com
http://addressOld2.com http://www.goodWorkingOne2.com
etc…

I used the following:

awk '{print "UPDATE library SET url=\47"$2"\47 WHERE url=\47"$1"\47;"}' 

to generate exactly the Postgres SQL statements I wanted in order to update the table of links:

UPDATE library SET url='http://www.goodWorkingOne.com' WHERE url='http://addressOld.com';
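
Put together, the run looked something like the following; the file names are hypothetical:

awk '{print "UPDATE library SET url=\47"$2"\47 WHERE url=\47"$1"\47;"}' stale_links.txt > fix_links.sql
psql -d your_database -f fix_links.sql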

Using an interactive shell in Emacs for Windows you might encounter a bug that causes the shell process to terminate when you signal an EOF (see point 7.6 here).
If, for example, you try:

/Program Files/Emacs/ cat >newfile
this is a test file
being created with the cat command
directly from the user input 
in shell.

Now, by pressing C-d (or C-c C-d), the input should finish here and the prompt return; what happens instead is that the shell process terminates. The same thing occurs if you try M-x comint-send-eof or if you pick an EOF signal from the menu.

I had noticed that before (in my Windows Emacs only, not under Unix), but last night I found that an EOF can be generated by typing C-q C-z.

I chose to set an alias and bind it to some keys in my .emacs to overcome the issue:

(defalias 'eof
  (read-kbd-macro "C-q C-z RET"))
(global-set-key [M-S-f8] 'eof)

Of course a better solution would be to advise comint-send-eof to proceed accordingly when in Emacs for Windows. Does anybody know how to do it, or care to share a stronger fix than this hack? Any feedback will be appreciated.

Update:
Actually there's no need to set an alias, as I learned that this is also possible and more straightforward:

(global-set-key [(meta shift f8)]
 '(lambda () (interactive) (execute-kbd-macro (read-kbd-macro "C-q C-z RET"))))

On how to delete a chunk of text spanning multiple lines: use sed to catch the range (/a/,/b/).

There were some JavaScript Google ads calls in a page I had downloaded via wget, doing:

 wget -O toCleanItUp.html www.someSite.com/somePage.html

After highlighting the whole page with C-x h, I did M-x shell-command-on-region and used the following:

sed "/<script .*>/,/<\/script>/d" > nowItIsClean.html

If feeling lazy (or if I don't have the file already open in an Emacs buffer), there's the straightforward way of using cat:

cat toCleanItUp.html | sed "/<script .*>/,/<\/script>/d" > nowItIsClean.html

Ah, the beauty of unix tools!

This one was eluding me for a while; what I simply wanted was to use cp (copy) with xargs to duplicate files in a different destination. What I had used before was cp in combination with find, via the -exec argument, like this:

find . -maxdepth 1 -name 'in*.css' -exec cp '{}' ~/src/Report/geo_portal/t/tmpl/src/css/ \;

But I wanted to use xargs, since I pretty much like its elegance and possibilities. The example below is a real one I used on BSD 7:

find . -name '*.html' | xargs grep -l 'Datasets' | xargs -J % cp -rp % ~/src/Report/playground/t/tmpl/src/css/

The explanation lies in the xargs manual, albeit worded a bit obscurely:

-J replstr
If this option is specified, xargs will use the data read from standard input to replace the first occurrence of replstr instead of appending that data after all other arguments. This option will not affect how many arguments will be read from input (-n), or the size of the command(s) xargs will generate (-s). The option just moves where those arguments will be placed in the command(s) that are executed. The replstr must show up as a distinct argument to xargs. It will not be recognized if, for instance, it is in the middle of a quoted string. Furthermore, only the first occurrence of the replstr will be replaced. For example, the following command will copy the list of files and directories which start with an uppercase letter in the current directory to destdir:

/bin/ls -1d [A-Z]* | xargs -J % cp -rp % destdir

Note: When I wanted to use it in my Cygwin shell I had to replace the "-J" option with "-I". In this example I copied every image smaller than 3000 bytes into another folder.

ls -ltra | awk '{print $5," ",$9}' | awk '$1 < 3000' | awk '/gif|png|jpg/ {print $2}' | xargs -I % cp -rp % /web/R/images/
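
For what it's worth, the three awk invocations can probably be collapsed into a single one; the same caveats about filenames containing spaces apply:

ls -ltra | awk '$5 < 3000 && /gif|png|jpg/ {print $9}' | xargs -I % cp -rp % /web/R/images/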