Operating Systems > Linux and UNIX
Restarting Linux?
Fett101:
quote:Originally posted by X11 && BOB: void main wannabe.:
Really, that's cool, can't wait till 2.6.
But some Linux boxes haven't been restarted for a long time...
http://uptimes.wonko.com/account.php?op=details&hid=798
That's a big uptime.
--- End quote ---
quote:CPU Load: 0%
Idle: 99%
--- End quote ---
Of course... perhaps he has a client that doesn't support reporting CPU load, but still.
From the Uptimes Project FAQ:
quote:
Does a high uptime mean an OS is stable?
Not necessarily. This is a common misconception. Uptime alone is merely a measure of how long a system has been continuously running, and does not always take into account how hard that system has been working during that time. Almost any OS, if it has nothing at all to do, will run for a very long time without having any trouble. But only a very good OS on solid hardware will achieve a high uptime while being put to thorough use. We try to compensate for this by allowing clients to report CPU usage and average idle time data, but not all clients support these features yet.
--- End quote ---
foobar:
Excuse my ignorance - but what is your point?
xyle_one:
Probably that the uptime means shit, because the computer isn't doing shit. Not to defend the fett, but it seemed obvious that's what he was getting at.
voidmain:
First of all, let me say that uptimes listed on a site specifically for the purpose of showing off uptimes really don't mean anything, especially when the numbers are so easily faked. Netcraft is a somewhat better indicator because uptime isn't its primary mission and it is much more difficult to fake, but I have proven that I can fake not only the uptime on Netcraft but also the other statistics it gathers (OS and web server). So I have lost some respect for even Netcraft's numbers, although they are more likely to be right.
Second, average CPU utilization really isn't a good indicator of how much a server is used or how important it is. Many important roles take very little CPU. For instance, I have had several Linux machines with multi-year uptimes, but you would never see those uptimes because the machines are far behind firewalls, in closets, performing very important internal networking tasks.
Things like intranet servers and proxy servers for several thousand clients. These machines just sat in the closets chugging away. Even though massive amounts of traffic are being processed, filtered, and access granted or denied, the load average may barely register, even on old machines. If your CPU usage averages above 50%, or even above 25%, you might want to think about putting in a faster machine, or analyzing other aspects of the configuration to see if there are other ways to make the machine run optimally. That is a fairly high average utilization, depending on the type of work the machine is doing.
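To see how hard (or how little) a Linux box is actually working over time, you can read the load averages and uptime straight from procfs. A minimal sketch; the file layouts are the standard Linux `/proc/loadavg` and `/proc/uptime` formats:

```shell
#!/bin/sh
# Read the 1/5/15-minute load averages straight from procfs.
read one five fifteen rest < /proc/loadavg
echo "load averages: 1min=$one 5min=$five 15min=$fifteen"

# /proc/uptime: first field is seconds since boot, second is cumulative
# idle time (summed across all CPUs, so it can exceed the first on SMP).
read up idle < /proc/uptime
days=$(awk -v s="$up" 'BEGIN { printf "%.1f", s / 86400 }')
echo "uptime: $days days"
```

A load average consistently well below the number of CPUs is the "barely registers" situation described above: the machine is doing its job with plenty of headroom.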
Uptime is not nearly as important as reliability, but it is important. It's nice to know I don't have to schedule an upgrade and reboot of a critical machine at 2:00am, because most of these things can be done on the fly without rebooting, on Linux and most UNIX systems at least. I have also had servers with close to two years of uptime that do have higher utilization averages. One of them serves many fairly high-traffic web sites, IMAP/POP mail, streaming video/audio, DNS, and many other functions, and it's only an old Compaq server running Linux. During high-traffic periods the CPU utilization can be up in the 80% range, but the overall average is still under 20%. Even so, that peak utilization is too high: while the CPU is averaging 80% there are many spikes to 100%, during which the machine cannot service requests as fast as a machine with more resources and a faster processor could.
When tuning a system, your goal is to get it to do the most work with the least CPU utilization. A well-running system will usually have a low load average. One thing that can affect it is not having enough RAM, which causes a lot of paging/swapping and increases CPU time. But there are *many* places where bottlenecks can occur and cause the system not to perform well.
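The not-enough-RAM bottleneck mentioned above is easy to spot from `/proc/meminfo`. A rough sketch, assuming the standard field names (`MemTotal`, `MemFree`, `Cached`, `SwapTotal`, `SwapFree`); a machine with little free+cached memory and a lot of swap in use is likely paging heavily:

```shell
#!/bin/sh
# Rough check for memory pressure using /proc/meminfo (values are in kB).
get() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

total=$(get MemTotal)
free=$(get MemFree)
cached=$(get Cached)
swap_total=$(get SwapTotal)
swap_free=$(get SwapFree)

echo "RAM: $((total / 1024)) MB total, $(((free + cached) / 1024)) MB free+cached"
if [ "$swap_total" -gt 0 ]; then
    echo "swap in use: $(((swap_total - swap_free) / 1024)) MB"
fi
```

Tools like `vmstat` and `free` present the same numbers; watching the swap-in/swap-out columns of `vmstat` over time is the usual way to confirm that paging, rather than the CPU itself, is the bottleneck.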
[ December 08, 2002: Message edited by: void main ]
DC:
quote:Originally posted by Billy Gates: Mac Commando:
I have read at many Linux sites that you never have to restart Linux, ever. Is this true?
--- End quote ---
Technically, yes, although restarting once in a while isn't too bad, certainly on an x86 desktop system (in contrast to a server system). Most notably, IDE hard disks aren't made for 24/7 operation, week in and week out, for months.
On stable hardware (and the OS isn't the only thing that determines stability), the only things stopping you from continuous uptime are kernel upgrades and hardware upgrades. With 2.5 (or heavy wizardry) and some hot-swappable devices you can all but eliminate those too if you want to. The processor and memory are the only hardware parts that are really needed and have no hot-swap capability available, I think (I define the mainboard as "the" computer, which makes it naturally non-swappable), although some things are better in their non-swappable form.
Do note that for some things (i.e. upgrades to applications or server software) services may need to be stopped, including the network, taking you offline.
But you could theoretically have a computer running indefinitely, that is, if you don't have a problem running ancient, bug-ridden software and have the ability to protect it from all real-world dangers (like power outages, fire, breakdown, the end of the universe...).