Asynchronous FTP Upload [message #172050] Fri, 28 January 2011 15:12
duderion (Junior Member)
hi guys,

could anyone tell me how I can handle an FTP upload to several servers
at once?

I found ftp_nb_put, but I don't know how to combine 5 connections with
this.


any help would be nice :)

dude
Re: Asynchronous FTP Upload [message #172052 is a reply to message #172050] Fri, 28 January 2011 15:21
Captain Paralytic (Senior Member)
On Jan 28, 3:12 pm, duderion <adrian.g...@googlemail.com> wrote:
> hi guys,
>
> could anyone tell me how I can handle an FTP upload to several servers
> at once?
>
> I found ftp_nb_put, but I don't know how to combine 5 connections with
> this.
>
> any help would be nice :)
>
> dude

I would have thought that you would just call it with 5 separate
connection resources.
Re: Asynchronous FTP Upload [message #172053 is a reply to message #172050] Fri, 28 January 2011 15:29
Jerry Stuckle (Senior Member)
On 1/28/2011 10:12 AM, duderion wrote:
> hi guys,
>
> could anyone tell me how I can handle an FTP upload to several servers
> at once?
>
> I found ftp_nb_put, but I don't know how to combine 5 connections with
> this.
>
>
> any help would be nice :)
>
> dude

You'll need to open 5 different streams and start each transfer. Keep
track of the status of each transfer in an array and loop while any of
them need to continue. In the loop, continue those which have not finished.
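
A minimal sketch of that loop, assuming the standard PHP FTP extension: ftp_nb_put() starts a non-blocking upload and returns FTP_MOREDATA while the transfer is still in progress, and ftp_nb_continue() resumes it. The hosts and credentials below are placeholders, and error handling (FTP_FAILED) is omitted:

<?php
// Start one non-blocking upload per server, then poll until all finish.
$hosts  = array('host1.example.com', 'host2.example.com'); // hypothetical
$local  = 'video.mp4';
$remote = 'video.mp4';

$conns  = array();
$status = array();
foreach ($hosts as $i => $host) {
    $conn = ftp_connect($host);
    ftp_login($conn, 'user', 'pass');   // placeholder credentials
    ftp_pasv($conn, true);
    $conns[$i]  = $conn;
    // Kick off the transfer without blocking.
    $status[$i] = ftp_nb_put($conn, $remote, $local, FTP_BINARY);
}

// Loop while any transfer still needs to continue.
while (in_array(FTP_MOREDATA, $status, true)) {
    foreach ($status as $i => $s) {
        if ($s === FTP_MOREDATA) {
            $status[$i] = ftp_nb_continue($conns[$i]);
        }
    }
}

foreach ($conns as $conn) {
    ftp_close($conn);
}
?>

A short usleep() inside the polling loop would blunt the constant-polling CPU cost mentioned below.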

Not sure what this is going to do for you though, other than take
a lot of unnecessary CPU, because you're effectively polling constantly.
Why don't you just upload each file individually?

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
jstucklex(at)attglobal(dot)net
==================
Re: Asynchronous FTP Upload [message #172055 is a reply to message #172053] Fri, 28 January 2011 15:56
duderion (Junior Member)
Hi Jerry
thanks for the quick and nice reply,

I need to do this because I have to transfer videos to around 500
hosts during one night. I have a 1 Gbit upload line, and that's why I want
to run those uploads simultaneously....

On Jan 28, 4:29 pm, Jerry Stuckle <jstuck...@attglobal.net> wrote:
> On 1/28/2011 10:12 AM, duderion wrote:
>
>> hi guys,
>
>> could anyone tell me how I can handle an FTP upload to several servers
>> at once?
>
>> I found ftp_nb_put, but I don't know how to combine 5 connections with
>> this.
>
>> any help would be nice :)
>
>> dude
>
> You'll need to open 5 different streams and start each transfer. Keep
> track of the status of each transfer in an array and loop while any of
> them need to continue. In the loop, continue those which have not finished.
>
> Not sure what this is going to do for you though, other than take
> a lot of unnecessary CPU, because you're effectively polling constantly.
> Why don't you just upload each file individually?
>
> --
> ==================
> Remove the "x" from my email address
> Jerry Stuckle
> JDS Computer Training Corp.
> jstuck...@attglobal.net
> ==================
Re: Asynchronous FTP Upload [message #172061 is a reply to message #172055] Fri, 28 January 2011 19:48
Jerry Stuckle (Senior Member)
On 1/28/2011 10:56 AM, duderion wrote:
> On Jan 28, 4:29 pm, Jerry Stuckle<jstuck...@attglobal.net> wrote:
>> On 1/28/2011 10:12 AM, duderion wrote:
>>
>>> hi guys,
>>
>>> could anyone tell me how I can handle an FTP upload to several servers
>>> at once?
>>
>>> I found ftp_nb_put, but I don't know how to combine 5 connections with
>>> this.
>>
>>> any help would be nice :)
>>
>>> dude
>>
>> You'll need to open 5 different streams and start each transfer. Keep
>> track of the status of each transfer in an array and loop while any of
>> them need to continue. In the loop, continue those which have not finished.
>>
>> Not sure what this is going to do for you though, other than take
>> a lot of unnecessary CPU, because you're effectively polling constantly.
>> Why don't you just upload each file individually?
>>
> Hi Jerry
> thanks for the quick and nice reply,
>
> I need to do this because I have to transfer videos to around 500
> hosts during one night. I have a 1 Gbit upload line, and that's why I want
> to run those uploads simultaneously....
>
<Top posting fixed>

That doesn't mean you'll get anywhere near 1 Gbit upload. Your limit in
this case is likely going to be disk access speed (assuming the other
hosts are replying in a timely manner, of course). And forcing the disk
to jump around to fetch data from different areas of the disk is likely
to be slower than accessing the data in a contiguous file.

The point being - even if you open 5 parallel connections, you are not
going to get 5x the speed; in fact, depending on what you're doing, you
may actually slow down the processing. And error recovery becomes much
harder.

You need to test and find out. The "sweet spot" may be anywhere from 1
to 500 parallel connections (although I highly doubt the latter :) ).
And it may vary depending on exactly which hosts you're currently
accessing and how quickly they respond.

P.S. Please don't top post. Thanks.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
jstucklex(at)attglobal(dot)net
==================
Re: Asynchronous FTP Upload [message #172080 is a reply to message #172061] Sun, 30 January 2011 11:33
duderion (Junior Member)
On Jan 28, 8:48 pm, Jerry Stuckle <jstuck...@attglobal.net> wrote:
> On 1/28/2011 10:56 AM, duderion wrote:
>
>> On Jan 28, 4:29 pm, Jerry Stuckle <jstuck...@attglobal.net> wrote:
>>> On 1/28/2011 10:12 AM, duderion wrote:
>
>>>> hi guys,
>
>>>> could anyone tell me how I can handle an FTP upload to several servers
>>>> at once?
>
>>>> I found ftp_nb_put, but I don't know how to combine 5 connections with
>>>> this.
>
>>>> any help would be nice :)
>
>>>> dude
>
>>> You'll need to open 5 different streams and start each transfer. Keep
>>> track of the status of each transfer in an array and loop while any of
>>> them need to continue. In the loop, continue those which have not finished.
>
>>> Not sure what this is going to do for you though, other than take
>>> a lot of unnecessary CPU, because you're effectively polling constantly.
>>> Why don't you just upload each file individually?
>
>> Hi Jerry
>> thanks for the quick and nice reply,
>>
>> I need to do this because I have to transfer videos to around 500
>> hosts during one night. I have a 1 Gbit upload line, and that's why I want
>> to run those uploads simultaneously....
>>
> <Top posting fixed>
>
> That doesn't mean you'll get anywhere near 1 Gbit upload. Your limit in
> this case is likely going to be disk access speed (assuming the other
> hosts are replying in a timely manner, of course). And forcing the disk
> to jump around to fetch data from different areas of the disk is likely
> to be slower than accessing the data in a contiguous file.
>
> The point being - even if you open 5 parallel connections, you are not
> going to get 5x the speed; in fact, depending on what you're doing, you
> may actually slow down the processing. And error recovery becomes much
> harder.
>
> You need to test and find out. The "sweet spot" may be anywhere from 1
> to 500 parallel connections (although I highly doubt the latter :) ).
> And it may vary depending on exactly which hosts you're currently
> accessing and how quickly they respond.
>
> P.S. Please don't top post. Thanks.
>
> --
> ==================
> Remove the "x" from my email address
> Jerry Stuckle
> JDS Computer Training Corp.
> jstuck...@attglobal.net
> ==================

Thanks Jerry,

I'll pass your useful information to my boss, who had this idea. I
guess it's better to upload them in a row...

THANKS A LOT
and I'll never top-post again :D

dude
Re: Asynchronous FTP Upload [message #172082 is a reply to message #172061] Sun, 30 January 2011 13:41
Luuk (Senior Member)
On 28-01-11 20:48, Jerry Stuckle wrote:
> On 1/28/2011 10:56 AM, duderion wrote:
>> On Jan 28, 4:29 pm, Jerry Stuckle<jstuck...@attglobal.net> wrote:
>>> On 1/28/2011 10:12 AM, duderion wrote:
>>>
>>>> hi guys,
>>>
>>>> could anyone tell me how I can handle an FTP upload to several servers
>>>> at once?
>>>
>>>> I found ftp_nb_put, but I don't know how to combine 5 connections with
>>>> this.
>>>
>>>> any help would be nice :)
>>>
>>>> dude
>>>
>>> You'll need to open 5 different streams and start each transfer. Keep
>>> track of the status of each transfer in an array and loop while any of
>>> them need to continue. In the loop, continue those which have not
>>> finished.
>>>
>>> Not sure what this is going to do for you though, other than take
>>> a lot of unnecessary CPU, because you're effectively polling constantly.
>>> Why don't you just upload each file individually?
>>>
>> Hi Jerry
>> thanks for the quick and nice reply,
>>
>> I need to do this because I have to transfer videos to around 500
>> hosts during one night. I have a 1 Gbit upload line, and that's why I want
>> to run those uploads simultaneously....
>>
> <Top posting fixed>
>
> That doesn't mean you'll get anywhere near 1 Gbit upload. Your limit in
> this case is likely going to be disk access speed (assuming the other
> hosts are replying in a timely manner, of course). And forcing the disk
> to jump around to fetch data from different areas of the disk is likely
> to be slower than accessing the data in a contiguous file.
>
> The point being - even if you open 5 parallel connections, you are not
> going to get 5x the speed; in fact, depending on what you're doing, you
> may actually slow down the processing. And error recovery becomes much
> harder.

You might be right, but most of it depends on the download speed at the
receiving site. If it's lower than 1/5 of your upload speed then you
should not worry ;)

>
> You need to test and find out. The "sweet spot" may be anywhere from 1
> to 500 parallel connections (although I highly doubt the latter :) ).
> And it may vary depending on exactly which hosts you're currently
> accessing and how quickly they respond.
>
> P.S. Please don't top post. Thanks.
>


--
Luuk
Re: Asynchronous FTP Upload [message #172085 is a reply to message #172082] Sun, 30 January 2011 14:20
Jerry Stuckle (Senior Member)
On 1/30/2011 8:41 AM, Luuk wrote:
> On 28-01-11 20:48, Jerry Stuckle wrote:
>> On 1/28/2011 10:56 AM, duderion wrote:
>>> On Jan 28, 4:29 pm, Jerry Stuckle<jstuck...@attglobal.net> wrote:
>>>> On 1/28/2011 10:12 AM, duderion wrote:
>>>>
>>>> > hi guys,
>>>>
>>>> > could anyone tell me how I can handle an FTP upload to several servers
>>>> > at once?
>>>>
>>>> > I found ftp_nb_put, but I don't know how to combine 5 connections with
>>>> > this.
>>>>
>>>> > any help would be nice :)
>>>>
>>>> > dude
>>>>
>>>> You'll need to open 5 different streams and start each transfer. Keep
>>>> track of the status of each transfer in an array and loop while any of
>>>> them need to continue. In the loop, continue those which have not
>>>> finished.
>>>>
>>>> Not sure what this is going to do for you though, other than take
>>>> a lot of unnecessary CPU, because you're effectively polling constantly.
>>>> Why don't you just upload each file individually?
>>>>
>>> Hi Jerry
>>> thanks for the quick and nice reply,
>>>
>>> I need to do this because I have to transfer videos to around 500
>>> hosts during one night. I have a 1 Gbit upload line, and that's why I want
>>> to run those uploads simultaneously....
>>>
>> <Top posting fixed>
>>
>> That doesn't mean you'll get anywhere near 1 Gbit upload. Your limit in
>> this case is likely going to be disk access speed (assuming the other
>> hosts are replying in a timely manner, of course). And forcing the disk
>> to jump around to fetch data from different areas of the disk is likely
>> to be slower than accessing the data in a contiguous file.
>>
>> The point being - even if you open 5 parallel connections, you are not
>> going to get 5x the speed; in fact, depending on what you're doing, you
>> may actually slow down the processing. And error recovery becomes much
>> harder.
>
> You might be right, but most of it depends on the download speed at the
> receiving site. If it's lower than 1/5 of your upload speed then you
> should not worry ;)
>


Not really. You're not going to get 1 Gbit/sec or even 200 Mbit/sec from
the disk drive, especially not continuously. So even if the download
speed on the other end is 200 Mbit/sec, that's still not going to be a
limiting factor.

And forcing the disk to pull data from several different files on the
disk will slow overall disk access even more, even if the files are
contiguous.

>>
>> You need to test and find out. The "sweet spot" may be anywhere from 1
>> to 500 parallel connections (although I highly doubt the latter :) ).
>> And it may vary depending on exactly which hosts you're currently
>> accessing and how quickly they respond.
>>
>> P.S. Please don't top post. Thanks.
>>
>
>
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
jstucklex(at)attglobal(dot)net
==================
Re: Asynchronous FTP Upload [message #172086 is a reply to message #172085] Sun, 30 January 2011 14:48
Luuk (Senior Member)
On 30-01-11 15:20, Jerry Stuckle wrote:
> On 1/30/2011 8:41 AM, Luuk wrote:
>> On 28-01-11 20:48, Jerry Stuckle wrote:
>>> On 1/28/2011 10:56 AM, duderion wrote:
>>>> On Jan 28, 4:29 pm, Jerry Stuckle<jstuck...@attglobal.net> wrote:
>>>> > On 1/28/2011 10:12 AM, duderion wrote:
>>>> >
>>>> >> hi guys,
>>>> >
>>>> >> could anyone tell me how I can handle an FTP upload to several
>>>> >> servers
>>>> >> at once?
>>>> >
>>>> >> I found ftp_nb_put, but I don't know how to combine 5 connections with
>>>> >> this.
>>>> >
>>>> >> any help would be nice :)
>>>> >
>>>> >> dude
>>>> >
>>>> > You'll need to open 5 different streams and start each transfer. Keep
>>>> > track of the status of each transfer in an array and loop while any of
>>>> > them need to continue. In the loop, continue those which have not
>>>> > finished.
>>>> >
>>>> > Not sure what this is going to do for you though, other than
>>>> > take
>>>> > a lot of unnecessary CPU, because you're effectively polling
>>>> > constantly.
>>>> > Why don't you just upload each file individually?
>>>> >
>>>> Hi Jerry
>>>> thanks for the quick and nice reply,
>>>>
>>>> I need to do this because I have to transfer videos to around 500
>>>> hosts during one night. I have a 1 Gbit upload line, and that's why I want
>>>> to run those uploads simultaneously....
>>>>
>>> <Top posting fixed>
>>>
>>> That doesn't mean you'll get anywhere near 1 Gbit upload. Your limit in
>>> this case is likely going to be disk access speed (assuming the other
>>> hosts are replying in a timely manner, of course). And forcing the disk
>>> to jump around to fetch data from different areas of the disk is likely
>>> to be slower than accessing the data in a contiguous file.
>>>
>>> The point being - even if you open 5 parallel connections, you are not
>>> going to get 5x the speed; in fact, depending on what you're doing, you
>>> may actually slow down the processing. And error recovery becomes much
>>> harder.
>>
>> You might be right, but most of it depends on the download speed at the
>> receiving site. If it's lower than 1/5 of your upload speed then you
>> should not worry ;)
>>
>
> Not really. You're not going to get 1 Gbit/sec or even 200 Mbit/sec from
> the disk drive, especially not continuously. So even if the download
> speed on the other end is 200 Mbit/sec, that's still not going to be a
> limiting factor.
>
> And forcing the disk to pull data from several different files on the
> disk will slow overall disk access even more, even if the files are
> contiguous.

But if the file is sent to 500 hosts, the file might be in cache, if
enough memory is available, which should speed up disk access again... ;)

>
>>>
>>> You need to test and find out. The "sweet spot" may be anywhere from 1
>>> to 500 parallel connections (although I highly doubt the latter :) ).
>>> And it may vary depending on exactly which hosts you're currently
>>> accessing and how quickly they respond.
>>>
>>> P.S. Please don't top post. Thanks.
>>>
>>
>>


--
Luuk
Re: Asynchronous FTP Upload [message #172088 is a reply to message #172086] Sun, 30 January 2011 16:02
Peter H. Coffin (Senior Member)
On Sun, 30 Jan 2011 15:48:29 +0100, Luuk wrote:
> On 30-01-11 15:20, Jerry Stuckle wrote:
>> Not really. You're not going to get 1 Gbit/sec or even 200 Mbit/sec from
>> the disk drive, especially not continuously. So even if the download
>> speed on the other end is 200 Mbit/sec, that's still not going to be a
>> limiting factor.
>>
>> And forcing the disk to pull data from several different files on the
>> disk will slow overall disk access even more, even if the files are
>> contiguous.
>
> But if the file is sent to 500 hosts, the file might be in cache, if
> enough memory is available, which should speed up disk access again... ;)

That'd be likely true for a Very Large system, but I'd not bet on it
for the "hundreds of megabytes" original situation unless it's a
completely dedicated system/cache. There are still other processes going
on that are going to end up with their own stuff.

--
If any foreign minister begins to defend to the death a "peace
conference," you can be sure his government has already placed its
orders for new battleships and airplanes. -Joseph Stalin
Re: Asynchronous FTP Upload [message #172090 is a reply to message #172086] Sun, 30 January 2011 17:04
The Natural Philosopher (Senior Member)
Luuk wrote:
> On 30-01-11 15:20, Jerry Stuckle wrote:

>> And forcing the disk to pull data from several different files on the
>> disk will slow overall disk access even more, even if the files are
>> contiguous.
>
> But if the file is sent to 500 hosts, the file might be in cache, if
> enough memory is available, which should speed up disk access again... ;)

Don't confuse Jerry with difficult facts. It's not fair.
Re: Asynchronous FTP Upload [message #172091 is a reply to message #172088] Sun, 30 January 2011 17:06
The Natural Philosopher (Senior Member)
Peter H. Coffin wrote:
> On Sun, 30 Jan 2011 15:48:29 +0100, Luuk wrote:
>> On 30-01-11 15:20, Jerry Stuckle wrote:
>>> Not really. You're not going to get 1 Gbit/sec or even 200 Mbit/sec from
>>> the disk drive, especially not continuously. So even if the download
>>> speed on the other end is 200 Mbit/sec, that's still not going to be a
>>> limiting factor.
>>>
>>> And forcing the disk to pull data from several different files on the
>>> disk will slow overall disk access even more, even if the files are
>>> contiguous.
>> But if the file is sent to 500 hosts, the file might be in cache, if
>> enough memory is available, which should speed up disk access again... ;)
>
> That'd be likely true for a Very Large system, but I'd not bet on it
> for the "hundreds of megabytes" original situation unless it's a
> completely dedicated system/cache. There are still other processes going
> on that are going to end up with their own stuff.
>
Of course it will be cached if there is adequate memory to do it. If
there isn't, add more.

At least on Linux EVERYTHING is cached to the memory limit: only if a
process needs more real physical RAM than is available will the
existing disk cache buffers be flushed.
Re: Asynchronous FTP Upload [message #172113 is a reply to message #172091] Mon, 31 January 2011 01:07
Peter H. Coffin (Senior Member)
On Sun, 30 Jan 2011 17:06:40 +0000, The Natural Philosopher wrote:
> Peter H. Coffin wrote:
>> On Sun, 30 Jan 2011 15:48:29 +0100, Luuk wrote:
>>> On 30-01-11 15:20, Jerry Stuckle wrote:
>>>> Not really. You're not going to get 1 Gbit/sec or even 200 Mbit/sec from
>>>> the disk drive, especially not continuously. So even if the download
>>>> speed on the other end is 200 Mbit/sec, that's still not going to be a
>>>> limiting factor.
>>>>
>>>> And forcing the disk to pull data from several different files on the
>>>> disk will slow overall disk access even more, even if the files are
>>>> contiguous.
>>> But if the file is sent to 500 hosts, the file might be in cache, if
>>> enough memory is available, which should speed up disk access again... ;)
>>
>> That'd be likely true for a Very Large system, but I'd not bet on it
>> for the "hundreds of megabytes" original situation unless it's a
>> completely dedicated system/cache. There are still other processes going
>> on that are going to end up with their own stuff.
>>
> Of course it will be cached if there is adequate memory to do it. If
> there isn't, add more.

Sometimes that's a solution.

> At least on Linux EVERYTHING is cached to the memory limit: only if a
> process needs more real physical RAM than is available will the
> existing disk cache buffers be flushed.

That's kinda the point. EVERYTHING gets cached, without much attention
being paid to what's going to be needed again and when. (At least, that
was the situation when I last looked at it; there wasn't any way to tell
the OS "This file is more important to keep in cache than that one.")
Which on a busy system means you have a very FULL cache, but not
necessarily that the right parts of a Large File are going to be there.
The next bit needed to send to a download may well have already been
purged because some log file wanted its space in the cache.

Writing a dedicated downloader app that can allocate that much memory
for the whole file, then share that memory explicitly with many little
clones of itself, will ensure that purging needed bits doesn't happen.
And a tool like that can probably come very, very close to saturating
the outbound link, especially if there's something that's being semi-
intelligent about managing the spawned processes. For example, it could
keep track of the average speeds to various sites; when it sees that
it has X kbps left between expected link speed and what's currently
being sent, it can then select another site to send to, based on it
being the highest historical rate that fits into the remaining bandwidth
window.
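
A rough sketch of the load-it-once-and-fork part of that idea (leaving out the bandwidth scheduling), assuming PHP's pcntl and FTP extensions on a Unix-like system; the hosts, file name, and credentials are placeholders:

<?php
// Read the file into memory once in the parent; each forked child
// inherits that copy, so the disk is only read once.
$data  = file_get_contents('video.mp4');
$hosts = array('host1.example.com', 'host2.example.com'); // hypothetical

foreach ($hosts as $host) {
    $pid = pcntl_fork();
    if ($pid === 0) {                       // child: upload and exit
        $conn = ftp_connect($host);
        ftp_login($conn, 'user', 'pass');   // placeholder credentials
        ftp_pasv($conn, true);
        $fp = fopen('php://memory', 'r+');  // expose the data as a stream
        fwrite($fp, $data);
        rewind($fp);
        ftp_fput($conn, 'video.mp4', $fp, FTP_BINARY);
        ftp_close($conn);
        exit(0);
    }
}

// Parent: reap children until none are left.
while (pcntl_waitpid(-1, $status) > 0);
?>

Whether those in-memory copies actually stay resident is, as the next reply notes, up to the OS.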

But this is, I think, getting a little too far into the design aspect,
though I think it's probably still within PHP's capacity.

--
"It's 106 light-years to Chicago, we've got a full chamber of anti-
matter, a half a pack of cigarettes, it's dark, and we're wearing
visors."
"Engage."
Re: Asynchronous FTP Upload [message #172128 is a reply to message #172113] Mon, 31 January 2011 12:29
The Natural Philosopher (Senior Member)
Peter H. Coffin wrote:
> On Sun, 30 Jan 2011 17:06:40 +0000, The Natural Philosopher wrote:
>> Peter H. Coffin wrote:
>>> On Sun, 30 Jan 2011 15:48:29 +0100, Luuk wrote:
>>>> On 30-01-11 15:20, Jerry Stuckle wrote:
>>>> > Not really. You're not going to get 1 Gbit/sec or even 200 Mbit/sec from
>>>> > the disk drive, especially not continuously. So even if the download
>>>> > speed on the other end is 200 Mbit/sec, that's still not going to be a
>>>> > limiting factor.
>>>> >
>>>> > And forcing the disk to pull data from several different files on the
>>>> > disk will slow overall disk access even more, even if the files are
>>>> > contiguous.
>>>> But if the file is sent to 500 hosts, the file might be in cache, if
>>>> enough memory is available, which should speed up disk access again... ;)
>>> That'd be likely true for a Very Large system, but I'd not bet on it
>>> for the "hundreds of megabytes" original situation unless it's a
>>> completely dedicated system/cache. There are still other processes going
>>> on that are going to end up with their own stuff.
>>>
>> Of course it will be cached if there is adequate memory to do it. If
>> there isn't, add more.
>
> Sometimes that's a solution.
>
>> At least on Linux EVERYTHING is cached to the memory limit: only if a
>> process needs more real physical RAM than is available will the
>> existing disk cache buffers be flushed.
>
> That's kinda the point. EVERYTHING gets cached, without much attention
> being paid to what's going to be needed again and when. (At least, that
> was the situation when I last looked at it; there wasn't any way to tell
> the OS "This file is more important to keep in cache than that one.")

The algorithms are fairly smart though. LRU and all that. If you can't
cache the file, at least cache the directory.

But in this case, if the SAME file is being pushed multiple times, it is
almost guaranteed to be in cache.

> Which on a busy system means you have a very FULL cache, but not
> necessarily that the right parts of a Large File are going to be there.
> The next bit needed to send to a download may well have already been
> purged because some log file wanted its space in the cache.

Not if there is more than a minimum amount of memory.

The dirty buffers will tend to be cleaned by sorting them for most
efficient disk access, and clean buffers can be simply freed up with no
disk access at all.

Whatever is then re-cached will tend to be what was last used, and what
is hardest to re-load in terms of disk access time.

If the system is actually being used by live users on most or all
processes, that's very close to optimal: if you are short of physical
RAM then someone is going to suffer. Your argument is then about making
sure it's not YOUR application, rather than someone else's.

>
> Writing a dedicated downloader app that can allocate that much memory
> for the whole file,

Does not ensure it's actually in RAM either.

The OS is quite capable of paging out such RAM if another app needs it.

I think you are somehow not quite understanding how a modern OS uses
memory. In general you cannot hard-allocate RAM at all. Only kernel
processes have that privilege, IIRC.

This is one of the reasons why RTOS designs exist: if you need to
guarantee response times, you have to return to a more simplified OS
design, running very specific code.

> then share that memory explicitly with many little
> clones of itself, will ensure that purging needed bits doesn't happen.

It won't.

> And a tool like that can probably come very, very close to saturating
> the outbound link, especially if there's something that's being semi-
> intelligent about managing the spawned processes. For example, it could
> keep track of the average speeds to various sites; when it sees that
> it has X kbps left between expected link speed and what's currently
> being sent, it can then select another site to send to, based on it
> being the highest historical rate that fits into the remaining bandwidth
> window.
>
> But this is, I think, getting a little too far into the design aspect,
> though I think it's probably still within PHP's capacity.
>

No, it's completely outside of it, actually.

It's well into device driver or privileged daemon territory.
Re: Asynchronous FTP Upload [message #172130 is a reply to message #172090] Mon, 31 January 2011 12:38
bill (Senior Member)
On 1/30/2011 12:04 PM, The Natural Philosopher wrote:
> Luuk wrote:
>> On 30-01-11 15:20, Jerry Stuckle wrote:
>
>>> And forcing the disk to pull data from several different files on the
>>> disk will slow overall disk access even more, even if the files are
>>> contiguous.
>>
>> But if the file is sent to 500 hosts, the file might be in cache, if
>> enough memory is available, which should speed up disk access again... ;)
>
> Don't confuse Jerry with difficult facts. It's not fair.

NP, you just had to do it, didn't you?
Please take your medicine again, you were doing so well.

bill
Re: Asynchronous FTP Upload [message #172193 is a reply to message #172055] Thu, 03 February 2011 19:05
Jo Schulze (Junior Member)
duderion wrote:

> I need to do this because I have to transfer videos to around 500
> hosts during one night. I have a 1 Gbit upload line, and that's why I want
> to run those uploads simultaneously...

Sounds like a broken design to me.
Besides, I can't see any PHP question here.

> I'll pass your useful information to my boss, who had this idea.
That explains something. Go tell your boss to stick to his / her
PowerPoint stuff and not annoy others with braindead ideas.