[Ubuntu-be] Remember - IRC meeting tomorrow Thursday 28/10/2010 at 21.00 hr.

Jan Bongaerts jbongaerts at gmail.com
Fri Oct 29 17:17:16 BST 2010


Egon,
thank you VERY much for taking the time to write such a comprehensive
answer.
It does help a lot, and clarifies a lot of issues I didn't understand
before.
Thanks for such an educational insight. As a flight instructor, I know how
difficult it is to get a technical subject across to laymen.
You are doing this very well and I commend you for it.
Cheers,
Jan.

On Thu, Oct 28, 2010 at 4:45 PM, Egon Geerardyn <egon.geerardyn at gmail.com> wrote:

> Dear Jan,
>
> To answer your questions:
>
> a) The memory requirements of Linux vary a lot with which programs you use
> (and what for), but a basic Linux desktop (e.g. writing letters, some small
> spreadsheets, some e-mail, some photos and some web browsing) is perfectly
> feasible with the amount of RAM you mention. Regarding regular Ubuntu, I
> haven't tried a really low-end setup (I did check it with 512 MiB RAM on a
> fairly decent Mobile Pentium 4 and on a quite low-end Mobile Celeron of the
> P4 generation). It works really well for most people, but don't expect it to
> beat any new equipment. In any case, some variants of Ubuntu (e.g. Xubuntu
> and Lubuntu) have lower requirements than Ubuntu and will perform better on
> older hardware. This is because they have fewer features and are more
> optimized for memory and processor usage. On the other hand, Linux can be
> tailored to fit your needs (add this, remove that, ...), while in Windows
> only very few people bother to slim down their installation.
>
> Overall, Linux has better memory management algorithms, which the user can
> tweak a little. For most Windows folk, the efficiency of an algorithm is
> measured by how much free memory you have. That is actually a very bad
> measure, as the amount of free (or unused) memory does not really contribute
> to performance. Having no free memory at all does slow your computer down
> terribly, but as long as you have a bit of free memory it should work fine.
> In layman's terms, free memory is head room.
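>
> If you want to see that head room for yourself, here is a minimal sketch in
> Python (my own illustration; it assumes a Linux system with /proc mounted)
> that reads the kernel's own bookkeeping. Note how small "free" usually is
> next to the large "cached" figure:
>
>     # Parse /proc/meminfo (Linux only); values are reported in kiB.
>     def meminfo():
>         info = {}
>         with open("/proc/meminfo") as f:
>             for line in f:
>                 key, rest = line.split(":", 1)
>                 info[key] = int(rest.split()[0])  # strip the "kB" unit
>         return info
>
>     mem = meminfo()
>     # "MemFree" is the truly unused part; "Cached" is memory doing useful work.
>     print("MemTotal: %6d MiB" % (mem["MemTotal"] // 1024))
>     print("MemFree:  %6d MiB  (the head room)" % (mem["MemFree"] // 1024))
>     print("Cached:   %6d MiB  (file cache, freed when needed)" % (mem["Cached"] // 1024))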
>
> During the Windows XP era, the comparison between Linux and Windows was
> fairly simple: the Windows memory management was prehistoric. Very little
> caching (a mechanism to make a computer faster, see below) and swapping even
> when the computer didn't need it. With Windows Vista, the algorithms were
> really improved a lot: Vista caches much more data, which should make it
> faster. Windows 7 changed little in the algorithms; mainly the overall
> performance of the system was improved, since Vista was a real resource hog.
>
> In less technical terms (for your pupils): suppose you are doing your
> homework (the data) at your desk (the RAM). There are two general
> approaches. Some people (i.e. Linux) will get the assignment and directly
> think about what else they might need, so they go to the closet in the other
> room and take (cache) their pens, some text books, a dictionary, a
> calculator, writing paper, ... out of the closet if it's not already on the
> desk. They have most of the things they need at hand, and they start to
> work. Other people (i.e. Windows XP) will take the assignment, sit down to
> think about it, and not understand a word in the text. They fetch the
> dictionary from the other room, return, and work on the assignment until
> they want to write something down, so they fetch paper and pens, and so on
> and so on. The first kind of people might have less free space but will have
> more at hand to quickly finish their homework; the other people will have
> lots of free space on their desk but will have to run around a lot to get
> the things they need (and lose a lot of time, even though their desk is
> large enough to hold a few extra items). And as soon as Linux or Windows
> Vista/7 notices that your free space is becoming too small to work
> comfortably, it will put away the parts of the cache it expects not to use
> (i.e. remnants of other assignments).
>
> A bit more background about memory. The actual need for memory is driven by
> two factors: we want a lot of storage capacity (loads of data to process,
> e.g. videos) and we want to pay as little as possible. Some memories are
> quite cheap per storage unit (e.g. hard disks, DVD-R, ...); nowadays it
> costs next to nothing to store a few gigabytes on a hard disk, but these
> devices are really slow. Electronic memories, on the other hand, are really
> fast: you just store a very small bunch of electrons in (or next to) a
> transistor, with no mechanical parts involved. But storing a lot of data in
> these memories is really expensive (the level 1 (L1) cache of your processor
> is a very nice example). Of course, some memories sit between both extremes
> (e.g. your RAM is a lot faster than a hard disk but not as expensive as
> those really fast memories). To get the positive sides of both extremes
> (fast, and a lot of storage for little money), we apply a little trick. When
> the processor needs a piece of data, it first looks in the L1 cache (the
> fast one); if the data is not there, it tries a slower memory (the L2
> cache), or an even slower one (the L3 cache, RAM, and finally the hard
> disk). The piece of data is then copied into all the faster memories, so
> that if we need that same piece of information a little later, we can access
> it really fast (because it will be in the L1 cache). Conversely, when data
> goes stale in a fast memory (the fast memory is full, so the computer tries
> to free up space for new data), it is written back to the slower memory.
> That is the principle of caching.
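>
> A toy sketch of that principle (Python, my own illustration; the "slow
> memory" is just a table behind an artificial delay, and the cache holds only
> four entries):
>
>     import time
>     from collections import OrderedDict
>
>     SLOW_STORE = {n: n * n for n in range(100)}  # pretend: the hard disk
>
>     def slow_read(key):
>         time.sleep(0.05)               # mechanical parts are slow
>         return SLOW_STORE[key]
>
>     cache = OrderedDict()              # the small, fast memory ("L1")
>     CACHE_SIZE = 4
>
>     def read(key):
>         if key in cache:               # hit: the fast path
>             return cache[key]
>         value = slow_read(key)         # miss: go to the slow memory
>         cache[key] = value             # copy into the fast memory
>         if len(cache) > CACHE_SIZE:
>             cache.popitem(last=False)  # full: drop the oldest entry
>         return value
>
>     for key in (1, 2, 1, 1, 3):        # repeated reads of 1 come from cache
>         t0 = time.time()
>         read(key)
>         print("read(%d) took %.0f ms" % (key, 1000 * (time.time() - t0)))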
>
> Swapping is more or less the opposite. You have a fixed amount of RAM in
> your computer, so when you try to use more than that amount, the computer
> would grind to a halt: it has nowhere left to store any data. To circumvent
> this, most operating systems use a file (in Windows called the page file, in
> Linux a swap file) or even a whole partition (most common in Linux) to store
> parts of the memory. The computer tries to free up memory by writing it to
> the hard disk; but as I already said, we have a lot of space on the hard
> disk, yet it is extremely slow. That's why your computer becomes really slow
> when your RAM is full: the operating system frantically tries to write
> memory out to the hard disk (which you can actually hear in most cases).
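>
> On Linux you can see exactly which swap partition or file is in use; a small
> sketch (again assuming /proc is mounted):
>
>     # /proc/swaps lists every swap device the kernel currently uses.
>     with open("/proc/swaps") as f:
>         print(f.read())
>
>     # Typical output (sizes in kiB):
>     # Filename     Type       Size     Used   Priority
>     # /dev/sda5    partition  4194300  28672  -1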
>
> Regarding swapping, I've noticed that most Windows systems almost always
> have something in their swap file. With Linux you can tune this behavior
> (you can tell Linux how eagerly it should swap), but the default setting is
> quite decent. In most cases Linux only uses swap space if the memory is
> getting too full, so you retain full RAM speed as long as possible. (I have
> tweaked my settings a bit: I have about 4.5 GiB (out of 8) of RAM in use at
> the moment, but only 28 MiB of swap used, less than the default settings
> would give.) In my experience Windows behaves differently: even when your
> memory is not full, Windows will use a lot of swap space (I would expect
> about 1 GiB of swap in my case). So Windows is more aggressive in "swapping
> out" (writing to disk) your RAM contents even when that is not necessary.
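>
> The knob in question is vm.swappiness (0 to 100; higher means swap more
> eagerly, and 60 is the usual default). A sketch to read it:
>
>     # vm.swappiness controls how eagerly Linux swaps (0 = avoid, 100 = eager).
>     with open("/proc/sys/vm/swappiness") as f:
>         print("swappiness: " + f.read().strip())
>
> To lower it you would run, as root, `sysctl vm.swappiness=10` (or edit
> /etc/sysctl.conf to make the change permanent).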
>
>
> http://www.codinghorror.com/blog/2006/09/why-does-vista-use-all-my-memory.html
>
> b) Defragmenting was a really hot topic during the mid-nineties. This was
> because of the FAT file system, which was a rather "dumb" file system. I
> won't go into details (it has been a few years since I revised the technical
> side of file systems). Fragmentation is a phenomenon that occurs when you
> save and delete a lot of files on a partition. Say you want to save a file
> to your disk; you look at the disk and notice that at the end there is not
> enough space left to store it, but in between the older files you have
> deleted others, so in total you have enough free space to fit the new file.
> What you do is fragment the new file (cut it into pieces) and put the pieces
> into the free holes you find on the disk. When reading the file back (or
> overwriting it with a newer version), the hard disk has to seek to the next
> part between each piece of the file. With FAT this was rather tedious, so it
> took a long time, and your computer got slow once the disk was fragmented.
> The same thing happens when you append content to a file that has other
> files right after it: you have to write the new content in another chunk
> elsewhere on the disk. With newer file systems (NTFS for Windows, the ext
> family for Linux), the designers thought of ways to lessen these problems. I
> don't really know the details, or any comparison between Windows and Linux
> on this issue. One possible strategy is to keep some free space after each
> file, so that when you append to the file, the data can be written directly
> after it. That is something that is implemented (or should be?) in ext4,
> which has quite recently become the standard file system in Ubuntu.
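>
> A toy model of that FAT-style allocation (Python, my own illustration; a
> list of blocks stands in for the disk, "." is a free block) shows how a new
> file ends up in pieces:
>
>     disk = list("AAAA....BBBB..CC......")  # A, B, C: old files; ".": free
>
>     def write_file(disk, name, size):
>         """Naive allocation: fill the first free holes we find."""
>         placed = 0
>         for i, block in enumerate(disk):
>             if block == "." and placed < size:
>                 disk[i] = name
>                 placed += 1
>         if placed < size:
>             raise IOError("disk full")
>
>     write_file(disk, "D", 7)     # no single hole is 7 blocks large...
>     print("".join(disk))         # -> AAAADDDDBBBBDDCCD.....
>
> File D now lies in three fragments, so reading it back costs two extra
> seeks.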
>
>
> http://geekblog.oneandoneis2.org/index.php/2006/08/17/why_doesn_t_linux_need_defragmenting
> http://www.msversus.org/microsoft-windows-performance.html
>
>
> c) Windows has no real package management, so no real packages. An
> application (on any OS) generally makes use of libraries to provide certain
> functions (e.g. to draw figures on the screen, to print, ... instead of
> reinventing the wheel every time). Windows has some standard locations for
> libraries (e.g. C:\Windows\system32, if I'm not mistaken) to store these
> common parts of your programs. When a program is distributed, it has to know
> whether these libraries are already on your computer or not. If you can't
> assume a library is there, you have to provide it with your
> program/installer (or package, if you want to call it that). If you provide
> the library, you can store it either in the standard location shared by all
> programs, or in the directory of your own program. E.g. if your program
> relies on Microsoft Office 2007 being installed, it should check that the
> Office libraries are present. When you distribute the program, you can
> either ship MS Office together with your software (which might be a tad
> expensive) or have your installer tell the user "I won't install this
> software unless you install MS Office" (and then just close down). On top of
> this problem, you can get problems with different versions of the same
> library.
>
> Windows can only handle one version of a library in the shared location. So
> suppose you have two programs that need different versions of a library: you
> will have to install that library in the directory of one of the programs.
> And if you're unsure whether to include a certain DLL, you just include it,
> and your program will work. Of course, if most programmers work that way,
> you end up with a lot of different copies of the same library. When a bug is
> found in that library, it is very tedious to fix all occurrences of it (in
> most cases, people don't even bother!). This happened a few years back when
> something was wrong in a graphical library in Windows: a lot of programs had
> to be fixed one by one because of it.
>
> In most Linux distributions you have a package management system (dpkg for
> Ubuntu, with DEB packages and user interfaces such as aptitude, apt-get,
> synaptic, ...). That defines a set of rules to represent the files of a
> program, its dependencies on libraries or other programs, and its
> install/uninstall procedures. So when your program needs some library, it is
> easy to specify which library you want and even which versions are
> acceptable. This takes little extra effort from the programmer, because it
> is standard for the distribution (and in most cases also for closely related
> distributions; e.g. Ubuntu uses the Debian system, hence .DEB files). Linux
> can also handle multiple versions of a library quite well.
>
> The need for different versions of libraries arises because newer library
> versions may add or remove features that older software depends on. Removing
> features from a library is generally discouraged, but in some cases it has
> to be done.
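>
> A toy sketch of such a dependency check (Python, my own illustration; the
> package names and version numbers are made up, and this is of course not the
> real dpkg/apt logic, just the idea behind a line like "Depends: libfoo
> (>= 1.2)" in a Debian control file):
>
>     # Made-up database: package name -> installed version.
>     INSTALLED = {
>         "libgtk":   2,
>         "libprint": 3,
>     }
>
>     def check_depends(package, depends):
>         """Report which declared dependencies are satisfied."""
>         for name, min_version in depends:
>             if name not in INSTALLED:
>                 print("%s: missing dependency %s" % (package, name))
>             elif INSTALLED[name] < min_version:
>                 print("%s: %s too old (have %d, need >= %d)"
>                       % (package, name, INSTALLED[name], min_version))
>             else:
>                 print("%s: %s >= %d ok" % (package, name, min_version))
>
>     # "myeditor" declares its needs once; the package manager verifies them
>     # centrally instead of every installer shipping its own copies.
>     check_depends("myeditor", [("libgtk", 2), ("libprint", 4), ("libspell", 1)])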
>
> To summarize: Ubuntu offers one streamlined way to manage your installed
> programs, so programs don't each have to include the same files, and there
> is a clean way to check everything. More so, the package manager will even
> notice when two programs try to overwrite the same library (which might
> cause both programs to malfunction). It allows for different library
> versions, and because most libraries are present only once (some
> applications do use the strategy I described for Windows), it is easier to
> fix bugs. That is IMHO the true power of a Linux distribution: the package
> management system, a unified way to install, remove and manage software. If
> you want to know a bit more about this subject, "DLL hell" (or "dependency
> hell") is the term used to describe most problems related to libraries.
>
>
> d) I wouldn't say that that is the most important safety feature, but it
> does help. Windows folk are usually used to clicking every Next or OK button
> that pops up without reading the message, because you get flooded with those
> messages ("Are you sure you want to do this or that?", click "Next" 20 times
> to install this or that piece of software, ...). The same goes for UAC in
> Vista and Windows 7: on most home computers it only requires you to click OK
> to become administrator (in companies you have to enter the administrator
> credentials, which explains why Windows can work in the office). In Linux
> you have to type your password, and that is the moment most people think
> about what they are doing. In general, people are more reluctant to type in
> a password than to click OK. Some people may find it tedious; that's a
> little downside to this.
>
> On the other hand, Linux has always been a multi-user OS (like Unix).
> Windows started as a single-user OS (Windows 1 up to Windows Me) and was
> adapted (in Windows NT) to handle multiple users. But some features from the
> old Windows era are still present to keep old software running. In Linux the
> security features needed in a multi-user setup were there by design; in
> Windows they were bolted onto the existing code base after a while, so some
> of the security code will be sub-optimal.
>
> There actually is a lot of standardization, although distributions differ a
> little in the details. Not all software from one distribution will work on
> another (I'm currently trying to get a quite old circuit simulator built for
> Red Hat Linux and regular Unix to work on Ubuntu 10.10; "easy" is a
> different ball game), but most recent and most commonly used programs are
> distribution independent. A lot of software is common to most distributions
> (X server, SSH server, Apache, Gnome, ...). This common software base has
> caused some problems in the past (even with Linux): due to a Debian change
> to OpenSSL (something SSL/SSH related in any case, see
> http://it.slashdot.org/article.pl?sid=08/05/13/1533212&from=rss),
> cryptographic keys generated on computers running Debian (and therefore also
> Ubuntu) were very weak and easy to crack. That bug has been fixed by now,
> but it's an example of how a failure at one point can cause a lot of trouble
> when that part is used in a lot of computers. Software is inherently prone
> to errors: a lot of functions are used by other functions, so a security
> issue in one low-level function can cause security issues in hundreds of
> programs.
>
> But indeed, diversity is a great way to ensure security. It is harder to
> find a bug that bothers every single Linux computer, because Linux computers
> sometimes have different architectures (all Windows computers use x86 or
> x86-64 (AMD64); Linux also supports a lot of other platforms (PowerPC, ARM,
> Cell, ...)). The same mechanism is actually used by nature to ensure the
> survival of species. Say you plant one kind of potato, all more or less the
> same; when a disease comes along that is harmful to one plant, all plants
> can (and will) get infected. (That has already happened a few times, e.g.
> the Great Famine,
> http://en.wikipedia.org/wiki/Great_Famine_%28Ireland%29.) When you plant
> different kinds of potatoes, some plants will get infected but others will
> survive. The same goes for Linux: the differences buy you some security.
> Some servers make use of this principle by not using standard values for the
> installation.
>
> e) There is no real absolute answer to this, because the security holes in
> Windows do not exist in Linux and vice versa. But so far there has been
> little success in manufacturing large-scale malware for Linux; sure, some
> does exist (rootkits etc.). Hackers will mostly target Windows because it is
> easier to succeed in overpowering at least a few machines. A lot of Linux
> computers are servers (more security focused, both on Linux and Windows) and
> the defaults are quite sane (or enforced to be sane). In Windows some
> defaults were/are not sane (passwords etc.); I can't say whether the
> defaults in Windows 7 are all sane, because I mostly set them to sane values
> anyway.
>
> http://www.msversus.org/microsoft-windows-performance.html
>
> I hope this will help you a bit,
>
> Kind Regards
>
>
> Egon.
>
>
>
> e) That depends on the software you use. Some will run faster on Linux,
> some on Windows, I guess. There is some evidence that some applications run
> faster under Wine (a Windows API layer for Linux and Mac) than on real
> Windows (though Wine does not implement the Windows API entirely and is
> slower in a lot of cases). On XP some programs might run slower due to the
> caching differences, but on Vista and 7 the performance should be almost the
> same, I think. I haven't really performed any tests.
>
> But what does make a difference is that there is little to no malware for
> Linux, so you don't slow your computer down just by surfing the internet, as
> is the case with Windows, and you don't need a virus scanner or malware
> scanner to keep your system running.
>
>
> On Thu, Oct 28, 2010 at 11:14, Jan Bongaerts <jbongaerts at gmail.com> wrote:
>
>> Unfortunately I won't be there again.
>>
>> I'm still compiling an intro session for a school, and could use some
>> ideas on what to show:
>> 1) using the live CD,
>> 2) using a normally installed system.
>>
>> I would like to know the following background info, to answer some
>> likely questions.
>>
>> a) Why does Linux need so little memory to run (min 384MB if I'm not
>> mistaken), compared to Windows (min 1GB I think)?
>>
>> b) Why isn't it necessary to 'defragment' a Linux hard disk, like one
>> needs to do in Windows?
>>
>> c) Confirm that software packages in Linux are much lighter because of the
>> multi-package structure. If not, please give the reason.
>>
>> d) Confirm that the most important safety feature in Linux is the fact
>> that you always need a password to become root, and that the second most
>> important reason is that there is little standardisation, so it is difficult
>> to write malware that works on all flavours of the target software.
>>
>> e) Confirm that apps usually run faster in Linux than in Windows, because
>> of the different memory management.
>>
>> Those are just some of the things I can think of now.
>> I'd love to hear feedback from the experts here.
>>
>> Regards,
>> Jan.
>>
>>
>>
>>
>> On Wed, Oct 27, 2010 at 9:27 PM, jean7491-Events-Team <jean7491 at gmail.com> wrote:
>>
>>> Hi to all,
>>>
>>> Remember - next IRC meeting tomorrow Thursday 28/10/2010 at 21.00 hr.
>>> on #ubuntu-be -- IRC (http://webchat.freenode.net/).
>>>
>>> See agenda in wiki https://wiki.ubuntu.com/BelgianTeam/IrcMeetings
>>>
>>> --
>>> jean7491
>>> Ubuntu Belgium Events Team
>>>
>>>
>>> --
>>> ubuntu-be mailing list / mailto:ubuntu-be at lists.ubuntu.com
>>>
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/ubuntu-be
>>>
>>
>>
>>
>> --
>> Microsoft programs are like Englishmen. They only speak Microsoft.
>>
>> --
>> ubuntu-be mailing list / mailto:ubuntu-be at lists.ubuntu.com
>>
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/ubuntu-be
>>
>>
>


-- 
Microsoft programs are like Englishmen. They only speak Microsoft.