6400/8800PI-Black some timing testing

This is a sticky topic.
  • genetix
    replied
It would seem that lower voltage + Windows 7 (new kernels) has some effect on error levels in the system. The memory can boot and function flawlessly, but in an active, true 24/7 (I/O) stress work scenario this generates errors to disk, specifically to a RAID disk with big memory caching.

This seems to occur if these conditions hold:
    * Memory is at its lowest (MemTest-)stable voltage.
    * North bridge voltage is at its lowest stable setting.

To confirm this error I ran the system for several months under the conditions above, then tested with a 0.10 V raise to memory. That seems to have corrected all the issues there were.

So, to conclude: where the message above says "drop all voltages under Windows 7 18xxx and 2xxxx kernels by 0.20v", in an actual real-life work situation -0.10 to -0.14 V is the safe range to keep the system from corrupting the OS drive.


(Tests were done in the hardest environment for disk I/O there is: RAID-0 on 4 drives as Intel software RAID, 8 GB of memory fully in use, and a 16-hour-per-day general work schedule for the machine, networked and under stress pretty much every second from compression calculations.)

Hope this helps some of you lower the voltage on low-end (cheap snow boards ;P) motherboards, to avoid the heat they generate when added to extremely bad cooling conditions.
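As a back-of-the-envelope illustration of the margin rule described above (a sketch only; the function name and the rounding to typical BIOS voltage steps are my assumptions, not from this thread):

```python
def safe_win7_vdimm(memtest_stable_v, margin=0.10):
    """Add a safety margin on top of the lowest MemTest86+-stable
    DIMM voltage before committing to 24/7 I/O workloads under
    Windows 7, then round to the 0.02 V steps common in BIOS menus."""
    v = memtest_stable_v + margin
    return round(round(v / 0.02) * 0.02, 2)

# e.g. memory that is MemTest-stable at 1.80 V gets 1.90 V for 24/7 use
print(safe_win7_vdimm(1.80))        # -> 1.9
# upper end of the suggested -0.10 to -0.14 V band
print(safe_win7_vdimm(2.10, 0.14))  # -> 2.24
```

The point of the helper is just the arithmetic: take the MemTest floor, add the 0.10-0.14 V margin the post found necessary, and land on a value the BIOS can actually be set to.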



  • genetix
    replied
Well, I decided to separate this topic because it is very, very OS-specific, in that ONLY some kernel versions of Windows 7 allow these kinds of changes. All kernels over 16385, meaning 18xxx or 20xxx ONLY. Since MS is at the moment screwing up their WUA quite hard, you might want to check that your Windows 7 ACTUALLY is updating.

    Now to the point.


The voltages on page 1:
    * All setups over 2.00 V are actually stable at -0.16 to -0.20 V (example: 2.12 V (on Gigabyte, actual voltage 2.12 V) would be stable under Windows 7 at 1.92 V (on Gigabyte, actual voltage 2.02 V)), and so on.
    (Originally the voltages were calculated as stable at what Windows XP or Windows Vista would allow, and that is always 0.10-0.16 V above what is actually stable in MemTest86+, for example.)
    * Timings are the same, with the exception that Windows 7 now allows the same stability level as bootable MemTest86+. We can actually tweak 1066 MHz and other cool setups, so they actually work at low latency and are not an instant BSOD as they used to be with a Vista setup.



I will probably be continuing this topic at some point, as this is pretty new to me too, but I have run some testing here at 1:1 (950 MHz) CL4-5-4-16, and with the memory interface at 1100 MHz and CL5-5-5-15 it still POSTs, is stable, and is actually stable under Windows 7, since we can now use a generally higher voltage to push past the limits we had under Vista. I will get back to the subject later on.

BTW, damn, I can't believe it's almost been a year since I created this topic.
    Last edited by genetix; 03-30-2010, 12:08 AM.



  • genetix
    replied
    Originally posted by 4x64 View Post
It won't be long and it will all be operating on some sort of flash, or whatever new name they give it. The PCI controller is just sharing the resources of the memory, from what I gather from the post. I am sure the manufacturers will do what they do best: planned obsolescence. And we will just have to have those new boards and all the goodies that plug into them. Ahhh, the smell of upgrades.
Nope, not this feature. Nobody gives a damn about how memory is remapped as long as it shows the correct size; no flash or even PCI-E would help. Renaming, heh, yeah, probably, but this feature has been with us so long that I think it's already been renamed so many times only god knows what it's really called. PCI hole? Hahaha.

Anyway, disabling it still gives a nice speed-up to a large quantity of memory, at no cost other than the loss of 384 MB per 4 GB.

So, as a tweak, people should just turn it off. Hell, I keep wondering what this would do on a triple-channel 12-24 GB controller.
    Last edited by genetix; 11-15-2009, 08:22 AM.



  • 4x64
    replied
It won't be long and it will all be operating on some sort of flash, or whatever new name they give it. The PCI controller is just sharing the resources of the memory, from what I gather from the post. I am sure the manufacturers will do what they do best: planned obsolescence. And we will just have to have those new boards and all the goodies that plug into them. Ahhh, the smell of upgrades.



  • genetix
    replied
    How about explaining more about the PCI Memory Remapping and ways to shut it off and consequences (if any)?
The idea of this was to loop the remaining RAM (384 MB per 4 GB, that is) through PCI address space. The problem is that when you do this it also slows down all the RAM you have: every access your RAM makes has to pass through this array. Since the array, or loop, is slower (it is, after all, a loop to somewhere else), the latencies suffer very much from this.

    There's no downside to disabling it (this is simply bad design from the hardware world, done to get the full 4096 MB per controller working any way they could at the time 4 GB became a reality), except the loss of 384 MB per 4 GB of memory: 8 GB systems will have 7424 MB, and 4 GB systems 3712 MB, of available RAM with it off. With a full controller it's already a tight spot to start tweaking latencies; without this feature some stress is taken off the latencies, and you have a much better chance to tweak a full controller, or even 4 GB, with the kind of latencies that would normally only be functional with a single memory stick. The speed increase is instant, as no loop is made while it's disabled.

    I do animations, flash, some Java, and work with virtual machines on a daily basis, which requires a huge amount of RAM, or as much as there is to get. But hell, even if I were to push every last damn piece of software to its limits, these quantities of RAM would never be too low. No software, game, or application will utilize even over 4 GB, and even that is the very top.

No Windows or Linux system suffers from losing the feature, as they never required the PCI remap feature in the first place: they map all the memory the BIOS reports to be there, and you can utilize all of it as usual. This does not affect the system in any way, and as the Wikipedia link below shows, this feature actually created more problems than it was ever worth.

    Wikipedia:
    http://en.wikipedia.org/wiki/PCI_hole
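The available-RAM figures quoted above (7424 MB from 8 GB, 3712 MB from 4 GB) follow directly from the 384 MB per 4 GB figure; a minimal sketch of the arithmetic (the function name is mine, not from the thread):

```python
def usable_mb(installed_gb, hole_mb_per_4gb=384):
    """Usable RAM in MB with the PCI-hole region not remapped,
    assuming a fixed loss of 384 MB per 4 GB installed, as the
    post above claims for these boards."""
    installed_mb = installed_gb * 1024
    lost_mb = (installed_gb // 4) * hole_mb_per_4gb
    return installed_mb - lost_mb

print(usable_mb(4))  # -> 3712
print(usable_mb(8))  # -> 7424
```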

This is easy to simply test and see for yourself (as they say, self-realization is worth more than a thousand pictures). Run two benchmarks (I'd prefer RightMark Multi-Threaded Memory Analyzer (RMMT.EXE) with over 65536 KB of RAM on each core, because the buffer cannot be smaller than the L2/L3 cache or it gets sped up by it): the first with the feature enabled, the second with the feature disabled. Afterwards you can even re-tweak the latencies a bit from what you have now to improve the disabled result.
    Last edited by genetix; 11-13-2009, 03:46 PM. Reason: Sorry for novel, but took some time to prove & explain.



  • 4x64
    replied
    Originally posted by genetix View Post
Well, since it's all quiet, let's talk about something totally screwed up from the motherboard world, called the 'PCI Memory Remap' feature, found on every motherboard there is.

By simply turning this feature off you gain at least +200 MB/s in memory speed, at the cost of 384 MB per 4 GB of memory. This totally idiotic feature was designed to remap the existing memory above 3712 MB, and on, for example, an 8 GB board the 768 MB total loss is a hell of a lot less than losing 200 MB/s of read speed.

Or does someone consider that 7424 MB out of 8 GB isn't enough for use?
6 GB is more than enough as of now, lol. I only see this type of memory consumption when running CAD, and even then it is actually under 4 GB.

    How about explaining more about the PCI Memory Remapping and ways to shut it off and consequences (if any)?



  • genetix
    replied
Well, since it's all quiet, let's talk about something totally screwed up from the motherboard world, called the 'PCI Memory Remap' feature, found on every motherboard there is.

By simply turning this feature off you gain at least +200 MB/s in memory speed, at the cost of 384 MB per 4 GB of memory. This totally idiotic feature was designed to remap the existing memory above 3712 MB, and on, for example, an 8 GB board the 768 MB total loss is a hell of a lot less than losing 200 MB/s of read speed.

Or does someone consider that 7424 MB out of 8 GB isn't enough for use?
    Last edited by genetix; 11-11-2009, 11:53 PM.



  • genetix
    replied
Heh, well, glad some of it helped. Those are pretty extreme lines for where the blocks will go, but all tested in full MemTest, so they should be good. I saw G.Skill's comment on the 1066 vs 800 MHz choice people make. This is kinda true, but the truth is you cannot get the memory working any faster than the CPU/northbridge allows the bandwidth to go.

Of course this is different if intra-memory copy speeds are needed. However, those speeds are in 9 out of 10 cases completely irrelevant to actual speed. So 1:1 or lower ratios on AMD are always the best selection, especially while OCing.

    -edit-

Hmm, there are of course edge cases where the above 'statement' fails, like 1080 MHz at 5:6 with FSB 450+. While overclocking at the higher CL6 this would actually provide +500-900 MB/s write/copy speeds, and around 451-465 MHz FSB the read speed on Intel platforms would be exactly the same as the 1:1 ratio read speed at the highest FSB. But this comes at the cost of latency, so in the end it would still fail to get more actual speed.
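The ratio arithmetic in the edge case above can be sketched as: effective DDR2 speed = FSB clock × DRAM:FSB ratio × 2 (double data rate). A minimal illustration (function and parameter names are mine):

```python
def ddr2_effective_mhz(fsb_mhz, dram, fsb):
    """Effective (double-data-rate) DDR2 memory speed for a given
    FSB clock and a DRAM:FSB divider ratio."""
    return fsb_mhz * dram / fsb * 2

print(ddr2_effective_mhz(450, 1, 1))  # -> 900.0  (1:1 at FSB 450)
print(ddr2_effective_mhz(450, 6, 5))  # -> 1080.0 (5:6 divider, the ~1080 MHz case)
```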
    Last edited by genetix; 09-11-2009, 09:00 AM.



  • 4x64
    replied
Bump to move this thread back up to the top. It has a lot of good information and testing.

I personally had a bad experience trying to get 1066+ stable with my 8500 PIs and settled back to 800+ using my 6400 PIs; both sets are 8 GB (4 x 2 GB). My 6400s work without problems, and I have fine-tuned them to work as well as the set of 8500s that I am RMA'ing, which I had pulled my hair out over and lost my voice shouting at.



  • genetix
    replied
Well, actually every single setting from this topic passes a full memory test, and of course POSTs cleanly. There is an issue on some boards where, while adjusting the north bridge, the board gets unstable for the first couple of reboots. This is because the board adjusts the fine delays, and it will correct itself.

    ------------------

As for dima_s: sorry for the late response, I've been busy.

You should enter these timings into your BIOS: 1081 MHz (6:5 / 333 strap), 450 FSB, 5-5-5-17-4-70-7-4 (CL-tRCD-tRP-tRAS-tRRD-tRFC-tWR-tRTP); it should work just fine (make sure you are using memory slots 2 and 4, as in A2 and B2, not A1 and B1 or other combinations). However, you will need 2.20 V in the BIOS to keep it stable, and a minimum of 1.43 V on the north bridge. On a 900 MHz setup you can test 4-4-4-14-2-45-2 and a bunch of the lower values, as in the post here that was written as experimental when you first posted something. With tRCD and tRP the same, plus tRAS a little higher, the memory will balance after 2 reboots to the NB and should work through memory tests. Even 4-4-3-7-3-45-2 should work; however, you will have issues later on with that setting because of the other timings, so I suggest just raising tRAS to 14.
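One common rule of thumb (my addition as a sanity check, not something this thread states) is that tRAS should be at least CL + tRCD, which is consistent with the advice above to raise tRAS to 14 in the 4-4-3-… set:

```python
def tras_ok(cl, trcd, tras):
    """Rule-of-thumb check: tRAS should cover at least CL + tRCD,
    so a row stays open long enough for the access to complete."""
    return tras >= cl + trcd

print(tras_ok(4, 4, 7))   # -> False (the 4-4-3-7 set: tRAS too low)
print(tras_ok(4, 4, 14))  # -> True  (with tRAS raised to 14)
print(tras_ok(5, 5, 17))  # -> True  (the 5-5-5-17 set above)
```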

    ------------------

- Fixed some of the troubleshooting for USB lags; these options stabilize it in a different manner than the south bridge resets did to begin with.
    - I've seen a lot of DDR/DRAM VTT voltage tweaking lately on AM2+/AM3. This is a useless area to tweak and it is nonsense: as far as I've tested, the termination voltage should always sit at half the signal voltage, nothing more, even where L3 caches apply. I would understand applying a higher VTT on CPUs, but this does not apply to RAM in the same way, so I disagree with the whole idea. (lol, 1.4 V? Get ****ing serious, that would require 2.8 V on the DDR2, you know.)

    ------------------

Your tech support hasn't answered my questions by e-mail either, and it seems there's no proper response to some very good questions on the ASUS boards.



    Edit #01

Heh, I read some comments on this topic saying not to buy G.Skill again. Well, that is a kinda lame perspective, as these memories in general are actually pretty near the fastest memory you can buy. Judging something that doesn't POST cleanly is understandable, but that only shows the idiocy of whoever is tweaking. I think this is just a matter of taste: G.Skill is cheap and has flaws because of it, and their memory is much slower than the competition in terms of internal settings, while the reality is that it still goes lower than the competition when used correctly. Anyone thinking otherwise can dump me memory speed screenshots; I'll compete with those any day with G.Skill memories, even while out of spec.
    Last edited by genetix; 07-27-2009, 01:02 AM.



  • dima_s
    replied
I'd opened the thread before I posted here: http://gskill.us/forum/showthread.php?t=1113



  • GSKILL TECH
    replied
    If you are having issues with your own setup, start a new thread so we can help you solve it. Otherwise no one knows you're having an issue if you are posting deep into someone else's testing thread. We also have a direct telephone technical support line and email, so those are also options you can take if you need assistance.

    Thank you
    GSKILL SUPPORT



  • boucher91
    replied
Yup, my first experience, probably the last...
    I have NEVER had this much trouble with ANY other memory...



  • dima_s
    replied
OK! Finally found a great solution! --->>> NEVER BUY G.SKILL AGAIN, because of the useless tech support.



  • dima_s
    replied
Those of you who used latency 4-4-3-14: how did you get past POST? Mine doesn't even want to POST with anything less than 5-5-5-15.

