
Category Archives: programming

general stuff about languages and software tools

(This is mainly a reminder post for myself.)
For certain reasons I sometimes have to edit text pasted from an emacs buffer that I was editing with longlines-mode enabled. As this mode does, the paragraphs are hard-wrapped beyond a certain number of characters (when they extend over ‘fill-column’ length).

Although “the soft newlines used for line wrapping will not show up when the text is yanked or saved to disk”, they do remain if, say, I carelessly paste it directly into a gmail form to save for later reuse there.

My way to remove those artificially inserted line breaks is to run this one-liner on the text region:

sed -ne '1h;1!H;${;g;s#\n\([^\n]\)# \1#g;p}' | sed -e 's#^[ \t]*\(.*\)$#\1#g'

(The first sed command joins the lines by replacing each line break with a space, using the multiline search-and-replace method.
The second just gets rid of the leading whitespace at the beginning of each line.)
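For the record, awk’s paragraph mode can do the same join in a single pass. This is just a sketch, assuming the usual convention that blank lines separate the paragraphs:

```shell
# In paragraph mode (RS=""), awk reads each blank-line-delimited block as one
# record; replacing the remaining newlines with spaces unwraps the hard breaks.
printf 'first line\nof paragraph one.\n\nsecond\nparagraph here.\n' |
awk 'BEGIN { RS = ""; ORS = "\n\n" } { gsub(/\n/, " "); print }'
```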


Now that I’m getting fond of using a heredoc to insert mysql scripts into the bash shell, I have put this into my emacs initialization file. The same shortcut (Control+Shift+f11) will conveniently write different parameters depending on which shell I’m sitting in.

UPDATE: it seems I spoke too soon. Asking for the present working directory, formerly with (let ((a (shell-command-to-string "pwd | tr -d '\\n'"))), didn’t actually work when shells had been opened on different servers via Tramp.
The correction below (less elegant) depends on the shell buffer being created/renamed with an identifiable name (which could be the hostname or whatever consistent nomenclature we choose).
Anyway, until I figure out a better way, this does the job of inserting the right parameters into each shell:

(defun my-heredoc-sql-invocation-from-shell ()
  "Insert the appropriate parameters to run a heredoc mysql query depending on which shell I'm in."
  (interactive)
  (let ((a (buffer-name (window-buffer (minibuffer-selected-window))))
        (b nil))
    (cond ((string-match "serverA" a)
           (setq b "mysql -uroot mainDbToHitAt_A --password=`cat /etc/security/mysqlpassword` -t -vv <<\\!"))
          ((string-match "serverB" a)
           (setq b "mysql -ualpha mainDbToHitAt_B --password=`cat /etc/security/mysqlpassword` -t -vv <<\\!"))
          ((string-match "serverC" a)
           (setq b "mysql -uroot mainDbToHitAt_C -t <<\\!")))
    ;; avoid the wrong-type-argument error of (insert nil) when no server matches
    (if b (insert b) (message "No matching server for buffer %s" a))))

;; key shortcut to bind it to
(global-set-key (kbd "C-S-<f11>")  'my-heredoc-sql-invocation-from-shell)

Despite having been dealing with databases for some years, it was only recently that I started to run mysql queries straight from the shell by doing something like:

mysql -u(user) -p(password) -t -e 'whatever sql query here in one line'

Although a handy thing to use every now and then, this still felt somewhat limited, since the query passed in the “-e” argument must fit on a single line. In search of a better alternative, I was pleased to find that it’s possible to combine heredoc syntax with sql execution on the command line. A heredoc is a mechanism to break text into multiple lines, treated just as a file would be: after the “<<” sign, the parser says “here follows a document”, which gets wrapped between the pair of whatever text delimiters we choose. The obvious convenience is that, without having to create a file, we can easily run multiline queries (which read more naturally with line breaks and indentation) in the same way we would write them inside a file.

To quickly see what I’m talking about, type

mysql -u(yourDatabaseUserName) -p(yourPassword) -t <<eof
show databases;
eof

Now hit “Control D” and then “Enter” to see the output printed directly to your shell.
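The heredoc trick is not specific to mysql; any command that reads stdin will take one. A minimal check with plain cat, no database needed (the query text here is just a made-up example):

```shell
# Everything between <<eof and the closing eof line goes to the command's
# stdin, exactly as if it had been saved to a file and redirected in.
cat <<eof
SELECT id, name
FROM users
WHERE id < 10;
eof
```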

To speed things up, since I use the shell inside emacs, I bound the following to C-S-<f11> by adding what is below to my .emacs file:

(set-register ?q "mysql -u kabul -p ******** -t <<eofsql")
(defalias 'Q
  (read-kbd-macro "M-x insert-register RET q C-e"))
(global-set-key [C-S-f11] 'Q)

In a bash shell, I knew about the way to produce the output of a query into a tab delimited file by simply running:

mysql -uUSER -pPASSW < fileWithSQLquery.sql > fileToSave.tab

Now in this case what I needed was a CSV file, so I piped it to sed, replacing tabs with commas:

mysql -uUSER -pPASSW < fileWithSQLquery.sql | sed -e 's#\t#, #g' > fileToSave.csv
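A quick sanity check of the substitution on a fake row, without needing the database at all. (Note this naive replacement will produce malformed CSV if a field itself contains a comma or a tab.)

```shell
# Simulate a tab-separated result row and run the same sed substitution.
printf 'id\tname\tcity\n1\tAlice\tMadrid\n' | sed -e 's#\t#, #g'
# id, name, city
# 1, Alice, Madrid
```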

Here’s an example of how to run php from the command line, something I almost never do and would like to remember by writing it down.
Simply echo the php statement (within its corresponding tags) and pipe it to the php binary. E.g.:

echo '<?php $str = "2144338"; echo substr($str,0,2); ?>' | php

or even more usefully:

echo '<?php phpinfo(); ?>' | php

I sometimes forget the exact name of a file I was working on a while ago. Luckily, the way to state a range of time with find is quite intuitive; say we want to see the files modified in a span of 14 days starting from a week ago:

find . -ctime +7  -ctime -21 # (older than 7 days but newer than three weeks ago)

There are simply 3 options of numeric arguments for time:

  1. +n → more than ‘n’ days ago.
  2. -n → less than ‘n’ days ago.
  3. n → exactly ‘n’ days ago.
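The range logic can be checked end to end with a throwaway directory. This sketch uses -mtime instead of -ctime, because a file’s modification time (unlike its change time) can be backdated with touch; the -d option assumed here is GNU coreutils:

```shell
# Create three files with different ages and see which one the range catches.
dir=$(mktemp -d)
touch -d '2 days ago'  "$dir/recent"
touch -d '10 days ago' "$dir/in_range"
touch -d '30 days ago' "$dir/too_old"
find "$dir" -type f -mtime +7 -mtime -21   # prints only .../in_range
rm -rf "$dir"
```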

I struggled the other day doing some sysadmin work to recover data from a single table of our database. Editing big files (of several gigabytes) is no picnic even for vi(m) or emacs, so it wasn’t trivial to find a quick way to isolate the parts needed. For what it’s worth, here’s the method I followed with success, resorting to simple cat and sed commands at the command line:

  1. Get the creation statement for the table to be recovered

    cat your_entire_backup_file.sql | sed -ne '/Table structure for table `your_table`/,/-- Dump/p' > table_creation.sql
    
  2. Get the data

    cat your_entire_backup_file.sql | sed -ne '/INSERT INTO `your_table`/,/Table structure/p' > data_dump.sql
    
  3. Join the two into a single file

cat table_creation.sql data_dump.sql > data_for_single_table_to_copy.sql
    
  4. Optionally, in case you need to extract only some rows from that previous instance of the table, as was my case with records deleted by mistake, you might want to create a temporary table from which to later perform the selection of the desired rows. In order to do that, the table name should be altered in the creation and insertion statements:

    sed -i 's#your_table#your_temp_table#g' data_for_single_table_to_copy.sql
    
  5. Now we are ready to create that temporary table with its data inside our database:

    mysql -u your_username -p your_database_name < your_path_to_the_file/data_for_single_table_to_copy.sql
    
  6. Voilà! The table is there, containing the information you needed. Now it’s up to you to extract and reinsert whatever you want into the original table.

Note: different parameters could be used to isolate and put together the table creation and data parts in a single pass. Also, the awk command might be used instead since, like sed, it allows collecting portions by matching from the beginning to the end of a block of text. Just make sure you know which table comes after the one you are extracting:

awk '/Table structure for table `your_table`/,/Table structure for table `your_next_table`/{print}' your_entire_backup_file.sql > data_for_single_table_to_copy.sql

In case the table to extract happens to be the last one (which, again, you can find out with a mysql “show tables” command), modify the last part of the regexp to match accordingly.
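To see the range pattern at work without a real backup, here is a tiny synthetic dump (table and file names made up for the illustration). Note that the line matching the end pattern is printed too, so trim it off if you don’t want it:

```shell
# Build a minimal fake dump containing two tables, then extract the first.
f=$(mktemp)
cat > "$f" <<'eof'
-- Table structure for table `users`
CREATE TABLE `users` (id INT);
INSERT INTO `users` VALUES (1);
-- Table structure for table `orders`
CREATE TABLE `orders` (id INT);
eof
# The awk range prints from the first match up to and including the end match.
awk '/Table structure for table `users`/,/Table structure for table `orders`/{print}' "$f"
rm -f "$f"
```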

Having discovered the command cpio (which acts like tar but preserves symbolic links) to recursively put the whole directory structure and files of an entire website into a single file, I started to explore its use along with other ways to securely move files between hosts. The command rsync seems like an extraordinary find in that regard.

Here are some neat examples for backing up (or mirroring) sites with ssh I don’t want to miss:

There’s a really easy way to make a remote directory be an exact match of a local directory via an ssh connection. And that’s to use rsync to do the job. It’s smart and only sends enough data to sync the remote dir to the local one. Assuming that you have a local dir named master-copy and the remote system’s repository is in a directory named slave-copy you’d do:

rsync --rsh=ssh --delete -Cav master-copy/* user-name@remote.dom.tld:/path-to/slave-copy

That command will recursively sync everything found in master-copy to the remote. Any files found on the remote that aren’t in master-copy or one of its sub-dirs will be deleted from the remote (that’s what the --delete flag does). Only changed or new files will be transferred.

to which another expert adds:

The rsync idea is good — but I for my part always try NOT to install any additional software not essentially required. Today, ssh is “a must” on most systems and you can go with the cpio approach I’ve mentioned.
I must admit the rsync thing is transferring less data if stuff is already on the target side, but this is not the case when running
source=/var/tmp ; target=/var/backup
cd $source ; find . -mtime -1 -type f -print | cpio -ocB | ssh -l username host-b "cd $target ; cpio -icBduml"

daily.

Another suggestion to copy the entire directory tree at /some/path to other.host via SSH is to do:

find /some/path -print | sort | cpio -o -Hnewc | ssh -C other.host "cpio -idvum"

Finally, there’s more to chew on with this thorough example on how to mirror a site with rsync: http://www.askapache.com/security/mirror-using-rsync-ssh.html

Found this gem, with about all you need to know for setting the apache directives on websites.
From the same guy, who clearly knows his stuff, comes this note http://www.askapache.com/security/mirror-using-rsync-ssh.html on using the powerful rsync command.

After updating to the newest 1.7.7 version of cygwin (run uname -a to know which one you are on), I found that awk had been replaced by gawk, which not only affects many of my scripts and shell functions but also feels harder to type at the command line.

I found that you can simply copy one onto the other, so as to have awk as an alias of gawk.
In my setup, all it took was:

cd c:/cygwin/bin
cp gawk awk