Re: An extremely large hash lookup mechanism [message #172233 is a reply to message #172232]
Mon, 07 February 2011 13:15 |
Peter H. Coffin
On Mon, 7 Feb 2011 04:06:30 -0800 (PST), ram wrote:
> I have a MySQL table with customerid (bigint) & customer_unique_key
> (varchar)
>
> I have a PHP script that needs to upload customers into groups.
> The customer_unique_key will be uploaded and all the customerids
> should be entered into a new table.
>
> Initially my script was doing a per-record query to extract each
> customerid and print it.
> But this turned out to be too slow, since the upload file may have
> up to a million records.
>
> Now I modified the script to read all customer_unique_key ->
> customerid as key-value pairs into an array.
> This works fine and fast, but hogs memory and crashes whenever
> the number of records crosses around 3-4 million.
>
>
> What is the best way I can implement a hash lookup? Should I use a
> CDB library?
No, you should use a real database. Unless you plan on essentially never
adding a customer. But the way you've framed the question (including
babbling about hashes) makes it sound like you've already "solved" your
problem and have decided you want to use cdb. So, you might as well
have at it, find out that it doesn't fix your problem, then go back to
comp.databases.mysql, and this time start by describing your problem
instead of asking how to implement your solution.
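
For what it's worth, the usual way to do this kind of bulk lookup inside
MySQL (instead of in a PHP array or a cdb file) is to load the uploaded
keys into a temporary table and let the server do one set-based join.
A minimal sketch; the customers/customer_groups table names, the group
id, the file path, and the key length are all guesses based on your
description:

  CREATE TEMPORARY TABLE upload_keys (
      customer_unique_key VARCHAR(64) NOT NULL,   -- assumed key length
      PRIMARY KEY (customer_unique_key)
  );

  -- bulk-load the uploaded file straight into the temp table
  LOAD DATA LOCAL INFILE '/tmp/upload.txt'
      INTO TABLE upload_keys (customer_unique_key);

  -- one INSERT ... SELECT instead of a million single-row queries
  INSERT INTO customer_groups (customerid, groupid)
  SELECT c.customerid, 42            -- 42 = whatever group you're loading
  FROM customers AS c
  JOIN upload_keys AS u USING (customer_unique_key);

This only stays fast if customers.customer_unique_key is indexed, so
check that before blaming the approach.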
--
5. The artifact which is the source of my power will not be kept on the
Mountain of Despair beyond the River of Fire guarded by the Dragons
of Eternity. It will be in my safe-deposit box. The same applies to
the object which is my one weakness. --Peter Anspach "Evil Overlord"