Wednesday, November 28, 2007

Fun with JavaScript

This is a JavaScript snippet I found while stumbling.

It works like this: search for an image on Google. Once the image results show up, enter the script (without quotes) in the address bar of your browser. It will rotate the images around. If you keep pressing Enter again and again, the speed of rotation keeps increasing as well.

Thursday, August 9, 2007

UltraSparc T2 : Mainframe-on-a-chip?

In my previous post, I wondered whether Niagara 2 (UltraSparc T2) could run 64 different OS instances or whether they were just Solaris Zones (containers). It is confirmed that it can indeed run 64 OS instances using the LDom technology built into T2. I found a Sun blog post that demonstrates 64 instances of Solaris running on 64 T2 threads. Another post goes further and shows Solaris and Ubuntu running simultaneously. That makes T2 one helluva multicore chip!

Wednesday, August 8, 2007

Niagara 2 i.e. UltraSparc T2

It's very sad that most processor news in the world is confined to x86 only. When I commented on processors last year, I mentioned the Niagara chip, i.e. UltraSparc T1. Now Sun has come out with the second version of the energy-efficient Niagara, officially named UltraSparc T2. With 8 cores on the chip and 8 threads per core, it has a total of 64 hardware threads. The 8 threads in each core run in an I/O-multiplexed way, i.e. at any time only one thread runs, and another thread switches in when the running thread enters an I/O cycle. That means that at any point in time, 8 threads are running simultaneously on the chip. UltraSparc T1 was similar but had 4 threads per core, for a total of 32 threads.

From the webcast on Sun's page, I found the following interesting information on T2 chip:

One major difference from T1 is that T2 has a floating point unit on each core. T1 had one such unit for the entire chip, which made it practically useless for floating-point intensive tasks; T2 thus takes care of that limitation. T2 threads run at 1.4 GHz.

For crypto-intensive tasks there is a cryptographic processor unit on each core. There are also two PCI Express I/O ports on the chip, as well as integrated 10 Gb Ethernet.

Sun aims to sell this chip to other vendors as well, if they want to use it in their servers. This is a departure from its earlier policy of using its chips only in its own servers and selling those servers.

This chip seems great for multi-threaded applications written in languages such as Java.

One funny thing in the webcast was that T2 was being marketed as the fastest processor at 89.6 GHz, a figure arrived at simply by multiplying 64 threads by 1.4 GHz.
It is like calling a train with 10 coaches, each running at 100 miles/hr, the fastest vehicle at 1000 miles/hr!

A few times during the webcast it was mentioned that 64 operating systems could run simultaneously in the 64 threads due to LDom technology. I think they were talking about Zones, not really different OSes. Or is it possible to really run different OSes that way? I think not, but correct me if I am wrong.

UltraSparc T2 sure is a great chip: energy efficient, and it should give good throughput for well-written applications in a very small form factor in a data center.

Sunday, August 5, 2007

Preventing Gmail cookie stealing

There has been news of a vulnerability arising from the use of cookies by email sites like Gmail at Wi-Fi hotspots. Cookies can be stolen using sniffing software, and the entire session can then be hijacked to do malicious things with the target account. A simple way to stop such attacks is to use SSL for the entire session, not just for the login, which is what Gmail does by default. A nice add-on from CustomizeGoogle can be used to make sessions use SSL. In addition, there are many other cool features you get on installing this add-on in the Firefox browser. These features can be selected from the Tools menu of Firefox and include options such as making ads invisible in Gmail and Google search results. Also, links to search results from Yahoo and other popular search engines can be added for the same search string in Google search.

Wednesday, July 25, 2007

Find specific files like mp3 from Google search

It seems an old tip, but I found it only recently. To search for files like mp3s and other small files on Google - e.g. if you want to find the mp3 of a certain song, say, "hips don't lie" - do

intitle:index.of + "mp3" + "hips don't lie"

To get results having no *.htm or *.html files, you can do

intitle:index.of + "mp3" + "hips don't lie" -htm -html

The results give a lot of locations to download the files from, sites that are normally hard to find.


Monday, June 18, 2007

SysAdmin mag & How to write unmaintainable code

June's issue of SysAdmin magazine has some interesting Q&As on Solaris. Questions on the superblock, alternative methods for patching, etc. are given along with answers. It can be read here


Found this hilarious take on unmaintainable code, written some time back, when it was Slashdotted:

Saturday, June 16, 2007

Which "for loop" works better/faster

I was asked this question in an interview long ago and thought I would post it here.
Which of the following for loops works better, assuming no special compiler tricks - just programming logic? Will both pieces of code execute equally fast?

1 -
for (i = 0; i < 10; i++)
    for (j = 0; j < 100; j++)
        printf("hello\n");


2 -
for (j = 0; j < 100; j++)
    for (i = 0; i < 10; i++)
        printf("hello\n");

Friday, June 15, 2007

How file recovery software works

I discussed a file recovery tool in one of my earlier posts and deferred the discussion of how such tools work to a future post. Today I will discuss how file recovery software actually works.

Filesystems keep a volume table (directory table) that contains, among other things, an entry for each file on the hard disk along with the address of the location where the file's data is stored. When a file is deleted, only a small part of that file's entry is modified, marking the space as free. The data itself remains on the disk until it is overwritten at some later time. Now two possibilities arise:

1) The table entry for the deleted file is intact. In this case, it still contains the pointer to the file's data. A quick read of the volume table by the file recovery software, followed by a look at that location, may recover the data if it has not been overwritten by then. This is the QuickScan option in the tool shown in the figure of the earlier post.

2) The table entry itself has been overwritten, i.e. some other file's entry has replaced the freed file's entry. In this case, the file recovery software uses an advanced mode (the SuperScan option in the figure). In this mode, it scans the whole disk, reading each block and matching the files found there against entries in the table. That is why this option takes a long time. If a file in a block has an associated entry in the table, the file is still alive, i.e. not deleted, and the tool skips to the next file and its entry. If a file has no corresponding entry in the table, it has been deleted, and the tool marks it as "Found".

The most important step in data recovery after a file is accidentally deleted is to stop using the disk immediately if one wants to recover the file. Otherwise, the data might get overwritten and become unrecoverable by such tools. It also means that just deleting a file doesn't ensure the data is gone. One needs a proper tool to erase data. Such tools overwrite the whole disk with random bit patterns many times over, rendering it pattern-less and non-recoverable. Hardware techniques to erase data also exist, such as degaussing, which erases the data on a disk magnetically.

Thursday, June 14, 2007

Linus likes ZFS, but

The online world is abuzz with discussions of the mail that Linus Torvalds sent to the lkml with some seemingly incendiary anti-Sun remarks, and a more cool-headed response by Sun CEO Jonathan Schwartz. It has sent all the Paris Hilton front-page stories down to page 5 to bite the dust. Some things all this seems to imply:

- OpenSolaris has surely begun to ruffle some feathers; even Linus says OpenSolaris' ZFS is something that could make Linux change its license. That is something!

- The only thing most Linux developers, including Linus, think OpenSolaris needs Linux for is drivers. Does that imply that if a user can get a machine working with OpenSolaris, there'd be no need to install Linux?

- Linux users want ZFS. Linux developers have started to realize its importance as a filesystem, but are deflecting with licensing and patent issues. Why not talk to Sun directly and implement it? Surely, if FreeBSD and Mac OS X can implement it, so can Linux. It could be that porting it to Linux is harder and the developers have become lazy.

- Somehow Linus seems to know that "Linux code is _better_". Does that mean he has already peeked into the OpenSolaris code and compared it with Linux before coming to this conclusion? That is interesting, and as illuminating as his assertion.

I have great respect for Linus, as one of my earlier posts shows. But mails like this are uncharacteristic of him. I even feel it would have been rebuked as FUD-spreading had it come from someone else, say Microsoft. Hopefully, in the future, Linux and OpenSolaris will live in peace and users will have a choice of OS not dictated by the license.

Wednesday, June 13, 2007

ZFS flavor of the month

A week after Sun CEO Jonathan Schwartz commented that ZFS would be in Leopard, an Apple executive said "ZFS is not happening" when questioned about its inclusion in Leopard. Without a ZFS announcement at Apple's WWDC, Mac developers were left disappointed, and some reporters said they felt sleepy during the keynote.

(Update: Apple has denied the executive's claims and clarified that ZFS will be available as a limited option in OS X. See the comments on the original story for details.)

ZFS seems to be the flavor of the month. While many were expecting Apple to announce it was adding ZFS to Mac OS X, that doesn't seem likely after the Apple executive's comment. At least not in the forthcoming release.

In other news, ZFS was reviewed very positively in an InfoWorld article. The editor reviewing ZFS was all praise for it: "It's not every day that the computer industry delivers the level of innovation found in Sun's ZFS. More and more advances in the science of IT are based on simply multiplying the status quo. ZFS breaks all the rules here, and it arrives in an amazingly well-thought-out and nicely implemented solution."
OK, that makes up for the review being a year late. One thing I've observed is that though Sun says ZFS doesn't stand for Zettabyte File System anymore, most reporters still make it a point to expand ZFS that way.

Then eWEEK gave ZFS an Excellence award in the E-Business Foundations category. ZFS deserves many such awards and kudos. It has made a big difference in the world of filesystems.


Sunday, June 10, 2007

Links and symlinks - Unix and Windows

Hard links in Unix are files that have different names, and possibly different directories, but the same inode, i.e. the file's data is stored in just one place on the hard disk. All the hard links to a file point to that location. One can delete a hard link, but that won't delete the file if any other link to it exists.

Symbolic links, or symlinks, on the other hand are small files that contain a pointer to another file. They are distinct from the file they point to, so deleting a symbolic link won't delete the actual file. The implementation of symbolic links in Unix is transparent to the user: if a user opens and edits a symbolic link, he is actually editing the file the symlink points to. The symbolic link remains just a pointer to the actual file.

Windows has shortcuts, which are the nearest thing to symbolic links, but if someone edits a shortcut file, the shortcut itself gets changed, so it is not as transparent to the user as in Unix. I read somewhere that Windows Vista has introduced transparent symbolic links similar to Unix's.

Saturday, June 9, 2007

Core Solaris kernel paths

Some time back I was searching online for the paths of the core kernel binaries in Solaris, but the information was hard to find and not exhaustive. Finally I found it in the Solaris Internals book. The paths for the core Solaris kernel binaries are:

/kernel/genunix - Platform independent core kernel for non-UltraSparc based systems resides in this binary. All non-UltraSparc based systems load this genunix during boot time.

/platform/sun4u/kernel/genunix - optimized binary for UltraSparc, but it is independent of the system type. This binary is loaded during boot time only by UltraSparc systems.

/platform/{arch}/kernel/unix - Platform dependent component of the core kernel resides here. {arch} is the architecture of the system.

The other kernel modules get loaded on demand later i.e. when an application requires them. They reside under the /usr directory tree.

All these binaries contain the various low-level kernel services needed to run the system. The command to list the kernel modules on a system is

# modinfo

It outputs the modules loaded in the running system.

Friday, June 8, 2007

Going from JDS to CDE

I have started using CDE for Solaris now. JDS was becoming a pain with its sluggish pace, and it was eating a lot of memory, too. CDE seems lightweight in comparison. There are a lot of things in CDE that I wish were more JDS-like. I will try to configure it and see how friendly I can make it for general purpose use - mainly web browsing and mail.

The main differences one notices in CDE after a long time with other desktops are how minimizing windows works and the absence of desktop icons. On minimizing, a window becomes an icon on the desktop, unlike JDS's go-to-the-bottom-panel behavior.

CDE's responsiveness is much better than JDS's, though, and that is what sets it apart.

Tuesday, June 5, 2007

Creating a dynamic library - example

We all use library functions in the programs we write. An example of a library that is always used on Solaris and Unix-like operating systems is libc.so. But how does one create a library? It is not hard. A dynamic library can easily be created as shown in the following example.

Let's say we want to create a library called libgeek.so. It will contain an example function called my_library_func() that we will use in our program. We write a simple file called geek.c containing that function, then compile it as a library and call it libgeek.so (library names begin with lib):

$ cat geek.c

#include <stdio.h>

void my_library_func(void)
{
    printf("Inside my library function\n");
}

The above is the library function we wanted to create. We then compile it into a dynamic library by giving the -G option to the compiler:

$ cc -o libgeek.so -G geek.c

Now, we can use the generated library libgeek.so in our programs like:

$ cat hellolibrary.c

void my_library_func(void);   /* provided by libgeek.so */

int main(void)
{
    my_library_func();
    return 0;
}

Now we can compile our program, telling the linker to link against the library we created for my_library_func():

$ cc hellolibrary.c -L/home/osgeek -R/home/osgeek -lgeek

The -L and -R options tell the linker which path to search at link time and at run time, respectively, to find libgeek.so. With -l, the "lib" part of the name is dropped and "l" is prefixed: libgeek.so is linked as -lgeek.

When we run this program, the output would look like:

$ a.out
Inside my library function

That's it. We created a library and used it in a program.

Sunday, June 3, 2007

Active File Recovery tool

Today I'll talk about a tool for Windows. Under Windows, [Shift]+[Delete] deletes files without sending them to the Recycle Bin, so we can't get them back. Similarly, if a disk is formatted, the data is lost and we can't get it back - normally. Data loss can also result from a virus attack. Well, the data itself is not lost. We can actually recover data that is accidentally or unintentionally deleted. How the data is actually not lost, and how it can be recovered in theory, will be the topic of a future post. Today I will review a product that recovers lost or deleted data from a PC running Windows.

This product is called Active File Recovery, and it recognizes the most common file types and filesystems for recovery.

There is a demo version of the software that one can download and try. It recovers files of only up to 65 KB; the full version has no size limit on the files it can recover. I downloaded the demo version for this review to see how it works. The download was quick - a little over 2.4 MB - and it installed quickly. On launch, the options menu was clear and easy to navigate, and I quickly scanned my whole drive. The software detected a lot of deleted files. Unfortunately, the 65 KB limit didn't allow me to recover (for testing purposes) some songs and movie clips I had deleted, but it did recover some small pictures that were under 65 KB.

One would be surprised how much data remains on the disk and is recoverable. Files deleted months ago can show up and be recovered. This just proves that one can't simply delete data and then hand over or sell the hard disk (eBay sellers, be careful). One has to erase the data with good software for some assurance that it can't be recovered. Even then, there is some chance of the data being recovered with good-quality software like Active File Recovery. Imagine someone getting hold of your bank accounts and passwords!

The software has other features as well. It can also recover data from formatted memory cards. For Windows Vista users, there's an Enterprise edition of Active File Recovery that can recover data from an unbootable system. For this they ship a lightweight Vista-based environment, WinPE 2.0, that boots and runs in RAM; from there, one can run Active File Recovery to recover the data on the drives.

The recovery tool will be useful for anyone who has accidentally deleted files. A demo version can be downloaded from the Active File Recovery site.

Update: I've received a full version of the software and have done some testing with it. Unfortunately, I don't have the extra drive needed to test recovery of movies and other bigger files, but I managed to rescue some deleted photos about 1 MB in size. Trying to recover a bigger file onto the same drive overwrites some of the file's header data, which makes it impossible to open the file - or worse, makes Windows Explorer crash when the folder containing the file is opened.

Saturday, June 2, 2007

Firefox Add-on - Split browser

I've recently started using the Split Browser add-on for Firefox and am greatly impressed with it. It adds value to my browsing experience: no need to switch to another tab when I want to reference something on a different page. I can look at both pages together by splitting the current browser window any way I want - left, right, top, bottom. Once done, I can gather all the split windows back into old-style tabs. The currently open tab can also be split horizontally or vertically.

On a related note, I uninstalled the Cooliris add-on I had installed about a month ago. While it did look useful in the beginning, it was becoming too obtrusive and an annoyance, especially when links were very near each other: trying to open one link would bring up a preview of another. Also, I was using it less and less. Goodbye Cooliris, hi Split Browser!

Friday, June 1, 2007

How to find address of stack top : C

In technical interviews, candidates are sometimes asked how they would find the address of the top of the stack on their system by programming in C. One simple program that should mostly work is:

#include <stdio.h>

int main(void)
{
    int i;
    printf("Top of the stack is %p\n", (void *)&i);
    return 0;
}

As local variables are stored on the stack, this gives an approximate top of the stack. There are variations of this program that are also just a few lines and give more accurate results. Any more example code to find the stack top?

Wednesday, May 30, 2007

LaMacchia Loophole

The year: 1993. A 21-year-old MIT student named David LaMacchia set up a bulletin board system called "Cynosure". It generated a lot of traffic worldwide; people used the service to download software they wanted or upload what they had. It was online for about six weeks before being taken down by the authorities. Software companies claimed they lost a million dollars to Cynosure. A federal grand jury charged LaMacchia with one count of conspiring with unknown persons to violate the wire-fraud statute. But what LaMacchia did wasn't criminal conduct under the Copyright Act - the infringement was not for commercial advantage - so the charge was dismissed. The lawmakers had not imagined that someone might engage in this kind of activity with a non-financial motive. In 1997, Congress closed the loophole with the NET (No Electronic Theft) Act.

Sunday, May 27, 2007

No archives (*.a ) in Solaris anymore

While discussing static libraries in one of my previous posts, I noted that libm is provided both as a dynamic library (libm.so) and as a static archive (libm.a).

Well, that is no longer true for Solaris. Solaris 10 doesn't ship with a single static library.
Running
ls /usr/lib | grep '\.a$'
in /usr/lib, where libraries usually live, returned no results (the pattern is quoted so the shell doesn't expand it before grep sees it). I tried some more directories with the same result.
I don't know when static libraries were dropped from Solaris. My guess is that it was Solaris 10, but any pointers to information would be welcome.

FLV to MPEG converter

Many times we come across videos on the net that we want to download but can't, as they are in FLV (Flash video) format. There is an online open-source tool to download such videos from sites like YouTube. The tool converts the FLV files into MPEG format online, which can then be saved to a computer. The online FLV converter is a very useful tool. I downloaded a hilarious video clip from YouTube using it.



The quality of download was very good and it was fast. Try it out!

Friday, May 25, 2007

Wanted computer engineers

Found this fun advertisement on the net today:



Wednesday, May 23, 2007

Clicky Web Analytics - a good web analysis tool

I've been using Clicky Web Analytics on my blog for about six months now and have been very satisfied. It's been a great tool for gathering data on visits to my blog. I'm using their basic free service, which has many unique features not present in other services such as Google Analytics or FeedBurner.


It has most of the features expected of an analytics tool, and many more. I can see how many people have visited my site, at what time, from what IP address, from which country and city (also shown on a Google map), which browser and operating system they used, which website they came from, which pages they visited, what actions they performed on the blog, and how long they stayed. I can also see how many people came through search and what keywords they used.

The display on the Clicky website is pretty neat. I can see the referring websites in descending order in time for any given day. The history of all visits is saved for two weeks on my free membership; for paid members, the complete history of the website's use is saved, so one can see the pattern on a certain day many months ago.

The free service I am using has a limit of three websites that I can submit for analysis. To get more, one needs a paid account, which is not expensive - a nominal charge per month. For a website with a lot of hits, a paid membership would be useful. The paid account, called a premium account, has other features like 'Spy', which shows visits to your website live, in real time.

The website itself is very easy to navigate, with a good layout, and most information about my site is available with just a click. Another thing I liked about Clicky was that the script to put into my blog was very simple and small - no other tweaks to my blog's source code were needed.

The only downside of using such analytics on a blog is that one wants to go look at the data all the time. It gets almost addictive! Try it out if you haven't already, or even if you've been using other analytics tools.




Saturday, May 19, 2007

Static linking : library options in command line

In my last post I asked why it's advised that library options come last on the command line when static linking.

Here is the explanation:
Symbols on the command line are resolved from left to right. When a static library is processed, the linker extracts from it only the members that satisfy symbols that are undefined at that point.
Now, in the case of
cc -lfoo hello.c

there are no undefined symbols yet when libfoo.a is processed, so nothing gets extracted from it. When the object file is processed afterwards, the symbols it needs are never resolved, and the link fails with an "undefined symbol" error.
If hello.c is put before -lfoo as in

cc hello.c -lfoo

there are undefined symbols when libfoo.a gets processed, so the needed members are extracted. This works fine.

Dynamic linking doesn't have this issue, as all symbols are available through the virtual address space of the output file.
Static libraries have other issues, like bigger executables and the lack of a stable ABI (the application needs to be relinked with each new version of the library).
One advantage of static libraries is that executables linked to them are somewhat faster at runtime, because all the linking happens before load time. This helps in benchmarking; the math library libm is provided both as a shared object (libm.so) and a static library (the archive libm.a) since benchmarks make heavy use of it.

Friday, May 18, 2007

A quirk of static linking

A question related to linking today.
Why is it advised to put the library options at the end of the command line when compiling?

Hint: if we have a static library, say libfoo.a, that we want to link into our program hello.c, use

cc hello.c -lfoo
rather than
cc -lfoo hello.c

The -l option tells the compiler to link against library [lib]foo. Note that the "lib" part of libfoo is dropped and only the "foo" part is given to -l.

How Nerdy are you?

I took this Nerd test, worried that I was going to score as a "less nerdy" type. But the result was pleasantly surprising. It said, "All hail the monstrous nerd. You are by far the SUPREME NERD GOD!!!"
Perhaps my score was helped by a few dirty clothes in my room - and Solaris.
For now, I am in heaven!


I am nerdier than 95% of all people.

Thursday, May 17, 2007

RAID Primer

Found a brief and good online paper on the different types of RAID (Redundant Array of Independent Disks). It explains RAID concepts with a short description of each type along with its pros and cons.
The paper can be read here.

Tuesday, May 15, 2007

Top 10 funniest gadgets

While surfing the net today, I stumbled upon this list of the top 10 funniest gadgets.
My favorites are the DVD rewinder and the USB-powered butt cooler. What are yours?

Microsoft threatens Linux with patents

According to a news article, Microsoft has alleged that Linux and other open source software violate its patents - 42 by the Linux kernel alone and many by OpenOffice, totaling 235 patents in all.

Looks like an open source arm-twisting effort by MS, directly related to its deal with Novell last year.

More at :
CNN
CRN

Wednesday, January 3, 2007

Memory Overcommit and the OOM Killer

Linux has a feature called memory overcommit. Put simply, it means the kernel grants memory allocations even if it doesn't have enough to back them. This matters when a new process is created using fork(), which conceptually copies the parent's address space and so requires as much memory again once the new process (the child) exists. With memory overcommit, fork() always returns success - even if there is not enough memory to create the child!
The idea behind memory overcommit is that the child process rarely uses all the memory allocated to it. fork() is typically followed by exec(), which overlays the child's address space with some executable; when the child finishes its work, it exits, and the parent (which went into wait() after creating the child) resumes.
If overcommitted memory is actually needed and cannot be supplied, the kernel invokes the Out Of Memory (OOM) killer. Its job is to select a process to kill so that the memory demands can be satisfied. Not a very desirable feature, but it is necessary to keep memory overcommit - and it has made the OOM killer infamous. Choosing which process to kill is tricky: it might happen that some important process (e.g. a database) gets killed. Analogies like this show how serious the situation is when the killer is invoked.
It seems that in the 2.4 days, the OOM killer's favourite victim was the Netscape browser. The browser would crash all of a sudden and you'd have no idea why.
Memory overcommit together with the OOM killer is not an example of good design, yet it has even made its way into AIX. With 2.6, memory overcommit can be suppressed with some tunables, but it is on by default.
Fortunately, it doesn't exist in Solaris, which never used memory overcommit. First it was vfork() instead of fork() that prevented process creation from failing; in Solaris 10, posix_spawn() is used instead of vfork(), since vfork() is not MT-safe.