Re: Asynchronous FTP Upload [message #172091 is a reply to message #172088]
Sun, 30 January 2011 17:06
The Natural Philosoph
Peter H. Coffin wrote:
> On Sun, 30 Jan 2011 15:48:29 +0100, Luuk wrote:
>> On 30-01-11 15:20, Jerry Stuckle wrote:
>>> Not really. You're not going to get 1gb/sec. or even 200mb/sec. from
>>> the disk drive, especially not continuously. So even if the download
>>> speed on the other end is 200mb/sec, that's still not going to be a
>>> limiting factor.
>>>
>>> And forcing the disk to pull data from several different files on the
>>> disk will slow overall disk access even more, especially if the
>>> files are contiguous.
>> But if the files are sent to 500 hosts, the file might be in cache, if
>> enough memory is available, which should speed up disk access again.. ;)
>
> That'd likely be true for a Very Large system, but I'd not bet on it
> for the "hundreds of megabytes" original situation unless it's a
> completely dedicated system/cache. There are still other processes going
> on that are going to end up with their own stuff.
>
Of course it will be cached if there is adequate memory to do it. If
there isn't, add more.
At least on Linux, EVERYTHING is cached up to the memory limit: only if a
process needs more real physical RAM than is available will the
existing disk cache buffers be flushed.
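
As a quick illustration (a minimal sketch, assuming a Linux box with the
standard /proc/meminfo layout), you can watch how much RAM the kernel is
currently devoting to the page cache:

#!/usr/bin/env python3
# Report how much memory Linux is using for the page cache.
# Reads the standard /proc/meminfo fields (values are in kB).

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # kB
    return info

m = meminfo()
print(f"Total RAM : {m['MemTotal'] / 1024:.0f} MiB")
print(f"Free      : {m['MemFree'] / 1024:.0f} MiB")
print(f"Page cache: {m['Cached'] / 1024:.0f} MiB")
print(f"Buffers   : {m['Buffers'] / 1024:.0f} MiB")

Run it before and after streaming the file out once or twice; on a box with
spare RAM the Cached figure should grow by roughly the size of the file, and
subsequent uploads of the same file will be served from memory rather than
the disk.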