I had been thinking about converting to systemd for quite some time.
So every time someone mentioned systemd (on the internet), I read their story as if my life depended on it.
I am using Arch Linux, so when I read Jason’s blog post,
I was very happy. A few days later, Allan posted something similar,
and that was the moment I told myself: “It’s time, I can blame Allan for breaking my system”
I ran this command:
# pacman -S systemd systemd-arch-units systemd-sysvcompat
and also removed sysvinit & initscripts.
I noticed that /etc/rc.conf became /etc/rc.conf.pacsave
and rebooted my machine.
How difficult is that?
There were also a few steps that I needed to do.
Your reading material is here: Archlinux systemd and
systemd services.
After that, it was trivial to enable my services.
I have only a few of them:
# grep DAEMONS /etc/rc.conf.pacsave
DAEMONS=(syslog-ng network crond dbus avahi-daemon cupsd xinetd)
I use a static network at work.
I followed this link to create my network service.
vim /etc/conf.d/network
vim /etc/systemd/system/network.service
# systemctl status network
# systemctl enable network.service
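For reference, a minimal static-network unit in the style of the classic Arch wiki example (the interface name, addresses, and gateway below are placeholders, not my real /etc/conf.d/network values):

```ini
[Unit]
Description=Static network connectivity
Wants=network.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip link set dev eth0 up
ExecStart=/sbin/ip addr add 192.168.0.10/24 broadcast 192.168.0.255 dev eth0
ExecStart=/sbin/ip route add default via 192.168.0.1
ExecStop=/sbin/ip addr flush dev eth0
ExecStop=/sbin/ip link set dev eth0 down

[Install]
WantedBy=multi-user.target
```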
# systemctl status syslog-ng
# systemctl enable syslog-ng.service
Be aware that crond is provided by cronie!
systemctl status crond.service
systemctl enable cronie.service
systemctl status avahi-daemon
systemctl enable avahi-daemon.service
dbus was already enabled
systemctl status dbus
Be aware that cupsd is provided by cups.
systemctl status cupsd
systemctl enable cups.service
and finally
systemctl status xinetd
systemctl enable xinetd.service
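For the record, the old DAEMONS array maps onto systemd unit names almost one-to-one; here is a throwaway sketch of that mapping (network is my custom unit from above, and the crond/cupsd renames are the ones already mentioned):

```shell
# sketch: map old rc.conf daemon names to systemd unit names
# (crond is provided by cronie, cupsd by cups, as noted above)
map_unit() {
    case "$1" in
        crond) echo "cronie.service" ;;
        cupsd) echo "cups.service" ;;
        *)     echo "$1.service" ;;
    esac
}

# print the unit for each old daemon; replace "map_unit" output
# with `systemctl enable ...` when doing it for real
for d in syslog-ng network crond dbus avahi-daemon cupsd xinetd; do
    map_unit "$d"
done
```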
It was simpler than converting from grub to grub2!
New installation guide, with screenshots,
for Arch Linux based on installation media 2012.08.04
Archlinux NetInstall based on media 2012.08.04
This guide doesn’t use any automated script or menu installer - just a basic net-installation.
#TinyCore Linux – Remaster http://ur1.ca/9qgcb
I’ve found that the best way to test something in virtualization is through snapshots.
But why snapshot the running/active virtual machine and not the backup/clone virtual machine ?
# virsh list --all
Id Name State
----------------------------------------------------
- winxp running
- winxpclone shut off
Check the clone disk format:
# qemu-img info winxpclone.disk
image: winxpclone.disk
file format: raw
virtual size: 5.0G (5368709120 bytes)
disk size: 3.1G
And remember to convert the raw disk to qcow2 first (internal snapshots require the qcow2 format):
# qemu-img convert -f raw winxpclone.disk -O qcow2 winxpclone.qcow2
And then edit your clone:
# virsh edit winxpclone
to use the qcow2 disk
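The relevant part of the domain XML after the edit looks roughly like this (the disk path and target below are placeholders; the point is that the driver type must match the converted image):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/winxpclone.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>
```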
and finally:
# virsh snapshot-create winxpclone
Domain snapshot 1341315833 created
List the snapshots:
# virsh snapshot-list winxpclone
Name Creation Time State
------------------------------------------------------------
1341315833 2012-07-03 14:43:53 +0300 shutoff
I needed to clone a virtual win2003 machine to a nas storage.
My storage is an lvm partition.
A. Suspend the virtual machine:
# virsh suspend win2003
B. Clone the virtual machine:
# virt-clone -d -o win2003 -n win2003clone -f /nas/storage/win2003clone.raw
This command will change the name, UUID, mac address and of course storage source.
C. Resume the virtual machine:
# virsh resume win2003
Remember that you have to change the IP of the clone, so that it will not conflict with the original.
Some extra tips:
If you need to change something before the clone procedure, dump the xml from the virtual machine:
- Dump xml
# virsh dumpxml win2003 > win2003clone.xml
- Edit xml
# vim win2003clone.xml
- Clone the virtual machine
# virt-clone -d --original-xml=/home/ebal/win2003clone.xml -n win2003clone -f /nas/storage/win2003clone.raw --force
Dynamic allocation of a virtual hard disk to a virtual machine:
# lvcreate -L 80G -n data vg01
# virsh attach-disk win2008 /dev/vg01/data vdb
Say you need to attach a usb device to a libvirt domain without rebooting the virtual machine.
Let’s figure this out together:
- Locate the usb device:
# lsusb -v
idVendor 0x0781 SanDisk Corp.
idProduct 0x5567 Cruzer Blade
- Build the below XML:
usb_device.xml
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x0781'/>
<product id='0x5567'/>
</source>
</hostdev>
- Attach device
# virsh attach-device VIRTUAL_MACHINE usb_device.xml
and if you want to detach it:
- Detach device
# virsh detach-device VIRTUAL_MACHINE usb_device.xml
In our business there are times that you have to work with windows boxes.
The main problem with that is that they are constantly broken.
So is there an efficient way to build a proper backup image?
The answer is YES, by using SystemRescueCd and ntfsclone
A simple and mini howto is here: http://balaskas.gr/wiki/ntfsclone
Enjoy
Whoever claims to know everything must constantly be asked questions by everyone else.
If you want to ask me something, do it in public.
I have some knowledge of linux stuff, but I don’t know everything.
This is a well known fact.
If you ask something in public, then something magical will happen.
1.
More people could experience the same problem.
They will immediately comment that they have the exact same problem as you.
You will feel that you are not alone in this world!
2.
Some nerds will try to answer you before you even complete your question. Don’t pay any attention to them. But your post will be at the top of every search in a few seconds. Don’t be biased against them. They have good in their hearts.
3.
Some geeks will try to answer you with LMGTFY and wikipedia comments. Don’t feel sorry about that. You should have read the manual in the first place, so this is your fault. Read it now.
4.
Some guru will answer you. Be very careful: statistics are against you. From time to time these answers will make you cry like a baby. Gurus have the ability to amplify all problems and resolve them in a way that even you can understand.
5.
From the moment you ask something in public, there is a trace that you exist! People now know you!!! You are newbi3_1995 (or something).
6.
If you ask something that someone else has already asked, then you’ll feel a magical connection to them. Internet love is true, embrace it. But seriously, you’ll have solved your problem with just a few links to the first question (back in 2005)!
7.
And the most important of all is, of course, that you will not bother me :P
ps: don’t be afraid to ask me something.
As you may already know, you can install a 32bit linux distribution on a 64bit machine.
Many people keep telling me that they are still using a 32bit installation because of some (not always opensource) applications which haven’t yet been built for a 64bit kernel.
Lately I had the same experience, when trying to access my android mobile over usb using google’s android platform tool: adb.
The Android SDK has only a 32bit flavor.
Many distros support multi-lib (libraries) repositories that you can add to your linux box, to use 32bit libs & applications inside your 64bit installation.
Of course this method isn’t only for 32bit; it also helps if you want different versions of the same shared object and don’t want to rebuild your packages.
I find this method pretty messy and I disapprove of setups like that.
The alternative method is to use GNU’s core utility: chroot.
Change root directory (run a command or interactive shell with a special root directory) is extremely easy to set up. Until now you have perhaps heard that chroot is used to jail exposed services, but in fact you can build your own environments for clean development, testing applications, or whatever you can think of.
You don’t even need an extra partition, or to reboot your linux box!
The archlinux package manager, aka pacman, has a “root directory” parameter, so you can create a new chroot archlinux environment as easily as adding a new package to your existing linux box!
With the below script you will set up a new 32bit environment on your archlinux machine:
export ROOT=/arch32
mkdir -pv $ROOT/var/cache/pacman/pkg/ $ROOT/var/lib/pacman/
pacman -v --arch i686 -r $ROOT --noconfirm -Sy base
mount --bind /dev $ROOT/dev
mount --bind /proc $ROOT/proc
mount --bind /sys $ROOT/sys
chroot $ROOT
If you need to open gui apps from your chroot env, you must type:
xhost +
from your primary OS.
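When you are done, exit the chroot and release the bind mounts. A small teardown sketch, assuming the same $ROOT as above (mountpoint -q guards against unmounting a path that isn’t actually mounted):

```shell
# release the bind mounts created above
ROOT=/arch32
for m in sys proc dev; do
    # only unmount paths that are actually mountpoints
    if mountpoint -q "$ROOT/$m"; then
        umount "$ROOT/$m"
    fi
done
```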
Unfortunately that’s a known fact, acknowledged by xmarks itself.
Is there a workaround?
The “disable proxy” suggestion isn’t a solution, especially if you can’t access the internet directly.
In my research for a solution, and after gazillions of tests, I think I’ve figured it out.
a. Using proxychains.
- Chain your proxy
- use:
export -p LD_PRELOAD=libproxychains.so
before running firefox (or alter firefox startup script)
works even with an http proxy!
b. Using a socks proxy.
Using a socks proxy, xmarks can connect to your custom (or not) webdav server and sync!
So if you aren’t using an http proxy (directly), xmarks will sync.
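For reference, the proxychains setup in (a) is just /etc/proxychains.conf with your proxy listed in the [ProxyList] section - a sketch with a placeholder host and port:

```
# /etc/proxychains.conf (host and port below are placeholders)
strict_chain
proxy_dns

[ProxyList]
# type  host         port
http    192.168.1.1  8080
```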
Mercurial has an excellent built-in extension for serving your code to others via http.
hg help serve
but it doesn’t have any authentication method.
You can, of course, use ssh to control access to your project, or an http server (or wsgi).
Searching the Extensions wiki page, I came up with hg-textauth!
In a few minutes I had the textauth extension on my linux box.
Lets get started with:
A.
~/.hgrc
[extensions]
textauth = /FULL_PATH/textauth.py
[textauth]
file= /FULL_PATH/text.auth
B.
touch /FULL_PATH/text.auth
hg authedit -v -c ebal
Enter password: ******
Verify password: ******
That’s it!
Now test it
C.
hg serve
And in another shell (machine/whatever):
D.
hg clone http://example.com:8000 myproject.hg
requesting all changes
http authorization required
realm: mercurial
user: ebal
password: ******
adding changesets
adding manifests
adding file changes
added 1 changesets with 4 changes to 4 files
updating to branch default
resolving manifests
getting ToDo
getting help.html
getting image.png
getting myproject.py
4 files updated, 0 files merged, 0 files removed, 0 files unresolved
This afternoon at the hackerspace we booted tinycorelinux via PXE on an alix3d3!
We didn’t have a usb keyboard, so we used xvkbd as an on-screen keyboard, and then we downloaded firefox, openssh-server, x11vnc & mpd. Using mpc from our laptops we listened to music streamed from the alix.
And all of that on a cpu of only 500MHz with only 256MB of RAM!!!
See some pictures
I usually use find to search for files and analyze the output.
Reading the manual page, I learned about the -nouser & -nogroup test expressions.
So I ran some tests to find the quickest (or a better) way to remove files with find.
First, let’s create a demo dir with a lot of files:
# cp -ra /usr /usr.test
# chown -R 10101.10101 /usr.test
How many files do we have ?
# time find /usr.test/ -xdev | wc -l
124298
real 0m0.575s
user 0m0.243s
sys 0m0.363s
Ok, 124,298 files is a lot!
If I want to delete the entire directory via rm, the running time is:
# time rm -rf usr.test/
real 0m5.883s
user 0m0.287s
sys 0m5.063s
5.88 seconds !
A walk through entire tree path:
# time find /usr.test/ -xdev -nouser > /dev/null
real 0m6.480s
user 0m2.763s
sys 0m3.660s
6.48 secs. It’s faster to remove them!
We now have a base to compare our results.
We will try 3 methods:
a. -delete find option
b. -exec find option
c. xargs via pipe
First Method
# time find /usr.test/ -xdev -nouser -delete
real 0m12.739s
user 0m2.826s
sys 0m9.513s
12.74 secs. That’s twice the amount of time!
Second Method
# time find /usr.test -xdev -nouser -exec rm -rf {} \;
real 0m6.307s
user 0m0.253s
sys 0m5.516s
6.3 secs. Same as rm (that was expected by the way).
Third Method
# time find /usr.test/ -xdev -nouser | xargs rm -rf
real 0m4.666s
user 0m1.117s
sys 0m3.426s
4.66 secs!
So xargs is the fastest of the above methods.
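One caveat I should add (not part of the timings above): the plain pipe breaks on filenames containing spaces or newlines; the NUL-separated variants are safer. A quick demo on a scratch directory:

```shell
# safer variant of the winning pipe for the real tree:
#   find /usr.test/ -xdev -nouser -print0 | xargs -0 rm -rf
# demo on a scratch directory with an awkward filename:
dir=$(mktemp -d)
touch "$dir/a file with spaces"

# -print0 / -0 pass NUL-terminated names, so spaces survive
find "$dir" -mindepth 1 -print0 | xargs -0 rm -rf

# the scratch directory itself remains, but is now empty
ls "$dir"
```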
Yesterday evening I had the pleasure of watching my apache crash until the entire memory of my vps server was consumed.
I had the opportunity to see a memory leak and drink a couple of beers among good friends.
Friends that can support you (psychologically) till you find the bug (is it one?) and fix it.
So let’s begin our journey:
My blog engine (flatpress) has an identi.ca/twitter plugin for posting entries from my blog.
I’ve connected it to my identi.ca account and I’ve done a little hack to add a microblogging category, to separate its rss feed from my blogging rss feed (category=1).
So the main problem was (is) that the identica.png image doesn’t get the correct file path from the php variable.
It should be something like that:
blog/fp-plugins/identicaconnect/res/identica.png
but it seems to be:
https://balaskas.gr/blog/https://balaskas.gr/blog/blog/fp-plugins/identicaconnect/res/identica.png
That would be easy to fix, right?
That was what I thought too.
But in the process of fixing it, I saw the below error in my apache logs:
“PHP Notice: Undefined index: PATH_INFO”
I fired up my phpinfo page and saw that there wasn’t any value for $_SERVER['PATH_INFO'].
In fact there wasn’t any $_SERVER['PATH_INFO'] in PHP Variables at all!!!
WTF ?
I was searching for an answer on google when I noticed that my site was inaccessible.
pgrep httpd | wc -l
showed me about 200 apache processes, and the count was rising really fast.
dmesg complained about resources, and at that moment my vps crashed for the first time, with a memory leak on the console!!!
My previous apache installation was : httpd 2.0.64 + php-5.3.3 + suhosin-patch-5.3.3-0.9.10.patch + mod_evasive + eaccelerator-0.9.6.1 and my apache custom compilation options were:
./configure \
    --enable-dav \
    --enable-rewrite \
    --enable-ssl \
    --enable-so \
    --enable-proxy \
    --enable-headers \
    --enable-deflate \
    --enable-cache \
    --enable-disk-cache
my php compilation options were:
./configure \
    --with-zlib \
    --with-openssl \
    --with-gd \
    --enable-mbstring \
    --with-apxs2=/usr/local/apache2/bin/apxs \
    --with-mysql \
    --with-mcrypt \
    --with-curl
When i saw the memory leak, my first (and only) thought was: killapache.pl !
In a heartbeat I was compiling httpd-2.2.20 + php-5.3.8 + suhosin-patch-5.3.7-0.9.10.patch + eaccelerator-0.9.6.1 + mod_evasive; I had moved my /usr/local/apache2 folder to apache2.bak and installed the newest (and hopefully more secure) versions of apache & php.
I have documented all of my installation processes pretty well, and I keep comments for every line I have ever changed in a configuration file. So setting up httpd 2.2.20 was indeed a matter of minutes.
I was feeling lucky and confident.
I started apache and fired up my blog.
I was tailing error logs too.
BUM !!!!
apache had just crashed again !!!!
WTF^2 ?
How can a null php variable crash apache with a memory leak and open about a million threads?
After debugging it, I fixed it by just putting an isset() check in front of the $_SERVER['PATH_INFO'] php variable!!!!
Too much trouble just to fix (I didn’t) the path of an image on my blog.
So my question is this:
- Is this an apache bug?
- Is this a php bug? or
- Is it a software bug (flatpress)?
Some people use Google reCAPTCHA (no one knows why) on their sites, but this is messing people up - especially people with astigmatism.
This is what you are supposed to see and write down:
This is what I see (without my glasses - I forgot them this morning, my bad):
Error:
PAM unable to dlopen(/lib/security/pam_fprintd.so): /lib/security/pam_fprintd.so: cannot open shared object file: No such file or directory: 1 Time(s)
PAM adding faulty module: /lib/security/pam_fprintd.so: 1 Time(s)
Solution:
authconfig --disablefingerprint --update
WTF, an ssh brute force attack only 40min after the server came up!
Jul 14 17:54:56 server1 sshd[1135]: Server listening on 0.0.0.0 port 22.
…
Jul 14 18:36:16 server1 sshd[2325]: Invalid user center from 70.38.23.166
Thank Wietse Venema for TCP Wrappers!
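A minimal TCP Wrappers policy, as a sketch (the subnet below is a placeholder): deny sshd to everyone, then allow only your own network:

```
# /etc/hosts.deny
sshd : ALL

# /etc/hosts.allow
sshd : 192.168.1.0/255.255.255.0
```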
I believe that this is a security risk for new installations.
Ok, root can’t ssh into the server.
But come on!
We create a simple user to login with and then su to root.
I don’t want the ssh daemon to be started by default before I’ve finished with my linux server configuration and added some security measures to prevent issues like that.
And the most significant part is that I had forwarded my router’s sshd port to a non-well-known tcp port!!!!