Wednesday, November 28, 2007
Thursday, August 9, 2007
Wednesday, August 8, 2007
From the webcast on Sun's page, I found the following interesting information on the T2 chip:
One major difference between T2 and T1 is that T2 has a floating-point unit on each core. T1 had only one such unit for the entire chip, which made it practically useless for floating-point-intensive tasks. T2 thus seems to have taken care of that limitation. T2 threads run at 1.4 GHz.
For crypto intensive tasks there is a cryptographic processor unit on each core. Also, there are two PCI express I/O ports on the chip as well as integrated 10 Gb Ethernet.
Sun aims to sell this chip to other vendors as well if they want to use it in their servers. This is a departure from its earlier policy where it used its chips only in its servers and sold those servers.
This chip seems great for multi-threaded applications written in languages such as Java.
One thing that was funny in the webcast was that the T2 was being marketed as the fastest processor at 89.6 GHz, a figure obtained simply by multiplying its 64 threads by 1.4 GHz.
It was like calling a train with 10 coaches running at 100 km/hr the fastest vehicle at 1000 km/hr!
A few times during the webcast it was mentioned that 64 operating systems could run simultaneously in 64 threads thanks to LDom technology. I think they were talking about zones, and not really different OSes. Or is it possible to really run different OSes that way? I think not, but correct me if I am wrong.
The UltraSparc T2 sure is a great, energy-efficient chip and should give good throughput for well-written applications, in a very small form factor in a data center.
Sunday, August 5, 2007
Wednesday, July 25, 2007
It seems to be an old tip, but I found it only recently. To search for mp3s or other small files with Google - e.g. if you want to find the mp3 for a certain song, say, "hips don't lie" - do
intitle:index.of + "mp3" + "hips don't lie"
To get results having no *.htm or *.html files, you can do
intitle:index.of + "mp3" + "hips don't lie" -htm -html
The results give a lot of locations to download the files from, sites that are normally hard to find.
Monday, June 18, 2007
Saturday, June 16, 2007
Which of the following for loops works better, assuming no special compiler tricks, just on programming logic? Will both pieces of code execute equally fast?
/* Version 1 */
for (i = 0; i < 10; i++)
    for (j = 0; j < 100; j++)

/* Version 2 */
for (j = 0; j < 100; j++)
    for (i = 0; i < 10; i++)
Friday, June 15, 2007
Files in a computer have a volume table (directory table) that contains, among other things, an entry for each file on the hard disk along with the address of the location where the file is stored. When a file is deleted, a small part of that file's table entry is modified, marking the space as free. The data remain on the disk until they are overwritten at some later time. Now two possibilities arise:
1) The table entry for the deleted file is intact. In this case, it still contains the pointer to the file's data. A quick read of the volume table by the file-recovery software, followed by a look at that location, may recover the data if they have not been overwritten by that time. This is the QuickScan option in the tool shown in the figure of the earlier post.
2) The table entry itself has been overwritten, i.e. some other file's entry has replaced the freed file's entry. In this case, the file-recovery software uses an advanced mode (the SuperScan option in the figure). In this mode, it scans the whole disk, reading each block and matching the files it finds against entries in the table. That is why this option takes a long time. If the file in a block has an associated entry in the table, the file is still alive, i.e. not deleted, and the tool skips to the next file. If a file has no corresponding entry in the table, it has been deleted, and the tool marks it as "Found".
The most important step after a file is accidentally deleted is to detach the disk immediately if one wants to recover the file. Otherwise, the data might get overwritten and become unrecoverable by such tools. It also means that just deleting a file doesn't ensure the data are gone. One needs a proper tool to erase data. Such tools overwrite the whole disk with random bit patterns many times over, rendering it pattern-less and non-recoverable. Hardware techniques to erase data also exist, such as degaussing, which erases the data on a disk magnetically.
Thursday, June 14, 2007
- OpenSolaris has surely begun to ruffle some feathers; even Linus says OpenSolaris' ZFS is something that could make Linux change its license. That is something!
- The only thing most Linux developers, including Linus, think OpenSolaris needs Linux for is drivers. Does that imply that if a user can get a machine working with OpenSolaris, there'd be no need to install Linux?
- Linux users want ZFS. Linux developers have started to realize its importance as a filesystem, but are deflecting the issue with licensing and patent concerns. Why not talk to Sun directly and implement the stuff? Surely, if FreeBSD and Mac OS X can implement it, so can Linux. It could be that porting it to Linux is harder and the developers have become lazy.
- Somehow Linus seems to know that "Linux code is _better_". Does that mean he has already peeked into the OpenSolaris code and compared it with Linux before coming to that conclusion? That is interesting, and about as illuminating as his assertion.
I have great respect for Linus, as one of my earlier posts shows. But mails like this are uncharacteristic of him. I even feel it would have been rebuked as FUD-spreading had it come from someone else, say Microsoft. Hopefully, in the future, Linux and OpenSolaris will live in peace and users will have a choice of OS not dictated by licenses.
Wednesday, June 13, 2007
(Update : Apple has denied the executive's claims and is clarifying that ZFS will be available as a limited option in OS X. See comments on the original story for details.)
ZFS seems to be the flavor of the month. While many were expecting Apple to announce it was adding ZFS to Mac OS X, that doesn't seem likely after reading the Apple executive's comment. At least not in the forthcoming release.
In other news, ZFS was reviewed very positively in an InfoWorld article. The editor reviewing ZFS was all praise for it: "It’s not every day that the computer industry delivers the level of innovation found in Sun's ZFS. More and more advances in the science of IT are based on simply multiplying the status quo. ZFS breaks all the rules here, and it arrives in an amazingly well-thought-out and nicely implemented solution."
OK, that makes up for the review being a year late. One thing I've observed is that though Sun says ZFS doesn't stand for Zettabyte File System anymore, most reporters still make it a point to expand ZFS that way.
Then, eWEEK gave ZFS an Excellence Award in the E-Business Foundations category. ZFS deserves many such awards and kudos. It has made a big difference in the world of filesystems.
Sunday, June 10, 2007
Symbolic links, or symlinks, on the other hand are small files that contain a pointer to another file. They are distinct from the actual file they point to, so deleting a symbolic link won't delete the actual file. The implementation of symbolic links in Unix is transparent to the user: if a user opens and edits a symbolic link, he is actually editing the file the symbolic link points to. The symbolic link remains just a pointer to the actual file.
Windows has shortcuts, which are the nearest thing to symbolic links, but if someone edits a shortcut file, the shortcut itself gets changed, so it is not as transparent to the user as in Unix. I read somewhere that Windows Vista has introduced transparent symbolic links similar to Unix's.
Saturday, June 9, 2007
/kernel/genunix - The platform-independent core kernel for non-UltraSparc based systems resides in this binary. All non-UltraSparc systems load this genunix at boot time.
/platform/sun4u/kernel/genunix - an optimized binary for UltraSparc, but independent of the system type. This binary is loaded at boot time only by UltraSparc systems.
The other kernel modules get loaded on demand later i.e. when an application requires them. They reside under the /usr directory tree.
All these binaries contain various low-level kernel services that are needed to run the system. The command to list all the kernel modules in a system is modinfo.
It gives as output the loaded modules in a running system.
Friday, June 8, 2007
The main difference between CDE and JDS that one notices when returning to CDE after a long time with other desktops is how minimizing windows works; desktop icons are also absent in CDE. On minimizing any window, it appears as an icon on the desktop, unlike JDS's go-to-the-bottom-panel behavior.
CDE's responsiveness is much better than JDS's, though, and that is what sets it apart.
Tuesday, June 5, 2007
Let's say we want to create a library called libgeek.so. It will contain an example function called my_library_func() that we will use in our program. We will write a simple source file called geek.c containing that function, and compile it into a library called libgeek.so (library names begin with lib):
$ cat geek.c
#include <stdio.h>

void my_library_func()
{
        printf("Inside my library function\n");
}
The above is the library function we wanted to create. We then compile it into a dynamic library by giving the -G option to the compiler:
$ cc -o libgeek.so -G geek.c
Now, we can use the generated library libgeek.so in our programs like:
$ cat hellolibrary.c
void my_library_func();

int main()
{
        my_library_func();
        return 0;
}
Now we can compile our program, telling the linker to link against the library we created for my_library_func():
$ cc hellolibrary.c -L/home/osgeek -R/home/osgeek -lgeek
-L and -R tell the linker which path to search at link time and at run time, respectively, to find libgeek.so. The library libgeek.so is referred to with the "lib" part removed and "-l" prefixed, as -lgeek.
When we run this program, the output would look like:
Inside my library function
That's it. We created a library and used it in a program.
Sunday, June 3, 2007
This product is called Active File Recovery and it recognizes the most common types of files and filesystems to recover.
There is a demo version of the software that one can download and try; it recovers files of only up to 65 KB. The full version has no size limit on the files it can recover. I downloaded the demo version for this review to see how it works. The download was quick - a little over 2.4 MB - and it installed quickly. On launch, the options menu was clear and easy to navigate, and I quickly scanned my whole drive. The software detected a lot of deleted files. Unfortunately, the 65 KB limit didn't allow me to recover (for testing purposes) some songs and movie clips I had deleted, but it did recover some small pics that were under 65 KB.
One would be surprised how much data remains on the disk and is recoverable. Files you deleted months ago can show up and be recovered. This just proves that one can't simply delete data and hand the hard disk over to someone or sell it (eBay sellers had better be careful). One has to erase the data with good software for a certain amount of assurance that it can't be recovered. Even then, there is some chance of the data being recovered with good-quality software like Active File Recovery. Imagine some person getting hold of your bank accounts and passwords!
The software has other features as well. It can also recover data from memory cards that were formatted. For Windows Vista users, there's an Enterprise edition of Active File Recovery which can recover data from an unbootable system. For this they have a lightweight Windows Vista version, WinPE 2.0, that boots and runs in RAM. From there, one can run the Active File Recovery software to recover the data on the drives.
The recovery tool will be useful for anyone who has accidentally deleted files. A demo version can be downloaded from the Active File Recovery site.
Update: I've received a full version of the software and have done some testing with it. Unfortunately, I don't have the extra drive needed to test recovery of movies and other bigger files, but I managed to rescue some deleted photos, about 1 MB in size. Trying to recover a bigger file within the same drive overwrites some of the file's header data, which makes it impossible to open the file or, worse, makes Windows Explorer crash when the folder containing the file is opened.
Saturday, June 2, 2007
On a related note, I uninstalled the Cooliris add-on I had installed about a month ago. While it did look useful in the beginning, it was becoming too obtrusive and annoying, especially when links were very near each other: trying to open one link would bring up a preview of another. Also, I was using it less and less. Good-bye Cooliris; hi, Split Browser!
Friday, June 1, 2007
printf("Top of the stack is %p", &i);
As local variables are stored on the stack, this gives an approximate top of the stack. There can be variations of this program that are also few-liners like the above and give more accurate results. Any more example pieces of code to find the stack top?
Wednesday, May 30, 2007
Sunday, May 27, 2007
Well, that is not true for Solaris anymore. Solaris 10 doesn't ship with a single static library.
ls -la | grep '\.a$'
in /usr/lib, where libraries usually reside, returned no results. I tried some more directories with the same result.
I don't know when static libraries were dropped from Solaris. My guess is that it was Solaris 10, but any pointers to information would be welcome.
The quality of download was very good and it was fast. Try it out!
Friday, May 25, 2007
Wednesday, May 23, 2007
I've been using Clicky Web Analytics for my blog for about 6 months now and have been very satisfied. It's been a great tool to gather data on the visits to my blog. I'm using the basic free service from them which has many unique features that are not present in other services such as Google Analytics or Feedburner.
It has most of the features expected of an analytics tool and many more. I can see how many people have visited my site, at what time, from what IP address, from which country and city (also viewable on a Google map), which browser and operating system they used, which website they came from, which pages they visited, what actions they performed on the blog and how long they stayed. I can also see how many people came through search and what search keywords they used.
The display on the Clicky website is pretty neat. I can see the referring websites in descending order in time for any given day. The history of all visits is saved for two weeks since I have a free membership. For paid members, the complete history of the website's visits is saved, so one can see what the pattern was on a certain day many months ago.
The free service I am using limits the number of websites I can submit for analysis to three. To get more, one has to get a paid account, which is not expensive, with a nominal charge per month. For a website with a lot of hits, a paid membership would be useful. The paid account, called a Premium account, has other features like 'Spy', which shows visits to your website live, in real time.
The website itself is very easy to navigate, with a good layout, and most information on my site is available with just a click. Another thing I liked about Clicky is that the script to put into my blog was very simple and small. No other tweaks to my blog's source code were needed.
The only downside of using such analytics for a blog is that one wants to go and look at the data all the time. It gets almost addictive! Try it out if you haven't already, or even if you've been using other analytics tools.
Saturday, May 19, 2007
In my last post I asked why it's advised that library options be the last in the command line in case of static linking.
The symbols on the command line are resolved from left to right.
Static linking looks through the static library for "undefined" symbols when it is processed.
Now in case of
cc -lfoo hello.c
there are no undefined symbols when libfoo.a gets processed, so nothing gets extracted from it. When the object file is processed afterwards, its references to the library remain unresolved, and the link fails with an "Undefined symbol" error.
If hello.c is put before -lfoo as in
cc hello.c -lfoo
there are undefined symbols by the time libfoo gets processed, so the needed objects get extracted. This works fine.
Dynamic linking doesn't have this issue as all symbols are available through the virtual address space of the output file.
Static libraries have other issues, like bigger executable size and the lack of a stable ABI (the application needs to be relinked with each new version of the library).
One advantage of static libraries is that executables linked against them are somewhat faster at run time, because all the linking occurs before load time. This helps in benchmarking. The math library libm is provided as a shared object (libm.so) as well as a static library (the archive libm.a), since benchmarking makes heavy use of this library.
Friday, May 18, 2007
Hint: if we have a static library, say libfoo.a, which we want to link to our program hello.c:
cc hello.c -lfoo
The -l option tells the linker to link against the library [lib]foo. Note that the "lib" from libfoo is dropped and only the "foo" part is given with -l.
Perhaps my score was helped by a few dirty clothes in my room, and Solaris.
For now, I am in heaven!
Thursday, May 17, 2007
Tuesday, May 15, 2007
According to a news article, Microsoft has alleged that Linux and other open-source software violate its patents - 42 by the Linux kernel alone and many by OpenOffice, totalling 235 patents in all.
Looks like an open source arm-twisting effort by MS directly related to their deal with Novell last year.
More at :
Wednesday, January 3, 2007
The idea behind the memory-overcommit feature of Linux is that the child process rarely uses all the memory allocated to it. fork() is usually followed by exec(), which overlays the child's address space with some executable. When the child process eventually exits, the parent process (which goes into wait() after creating the child) resumes.
When the memory is actually needed by the child and cannot be allocated, another piece of code is invoked: the Out Of Memory (OOM) killer. Its job is to select a process to kill so that the memory requirements after fork() can be satisfied. Not a very desirable feature, but it is necessary to keep the memory-overcommit feature of Linux, and it has made the OOM killer infamous. How to select the process to kill is tricky; it might happen that some important process (e.g. a database) gets killed by the OOM killer. Analogies like this show how serious the situation is when the killer is invoked.
It seems that during the 2.4 days, the OOM killer's favourite process to kill was the Netscape browser. The browser would crash all of a sudden and you'd have no idea why.
Memory overcommit along with the OOM killer is not an example of good design, but it has even made its way into AIX. With 2.6, the overcommit behaviour can be tuned through the vm.overcommit_memory variable, but by default the feature is present.
Fortunately, it doesn't exist in Solaris; Solaris never used memory overcommit. At first, vfork() was used instead of fork() to prevent process creation from failing. In Solaris 10, posix_spawn() is used instead of vfork(), since vfork() is not MT-safe.