Seagate IronWolf, 8 TB, NAS, Internal Hard Drive, CMR, 3.5 Inch, SATA 6Gb/s, 5,400 RPM, 256MB Cache, for RAID Network Attached Storage, 3 year Rescue Services (ST8000VN004)

£94.48
FREE Shipping


RRP: £188.96
Price: £94.48

In stock


Description

I haven’t been able to form a 100% certain answer to this question. My friend and I tested both motherboard ports and LSI ports, and he would always get these errors, while in my old server the disks were also connected to my LSI controller and there I had zero issues. Probably what you have been waiting for: this issue has been fixed in a new SC61 firmware that Seagate has released. Every Seagate IronWolf 10TB drive I have ever received, even the examples I bought recently, came with the SC60 firmware.
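The firmware revision a drive is currently running can be read out with smartctl from smartmontools; a minimal sketch, assuming a SATA disk at the example device path /dev/sda (substitute your own disk):

```shell
# Read the drive's identity block; the "Firmware Version" line shows
# whether the disk is still on SC60 or already on SC61.
# /dev/sda is an example device path -- substitute your own disk.
sudo smartctl -i /dev/sda | grep -i '^Firmware Version'
```

On an affected drive this would report SC60; after a successful update it reports SC61.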

AgileArray enables dual-plane balancing and RAID optimisation in multi-bay environments, with the most advanced power management possible. Seemingly this had been going on for a while, but recently someone from Seagate started replying to the topic, and it was mentioned that new firmware was now available which, combined with a Synology update, would re-enable the write cache and fix this issue on these drives. The actual fix! During the first scrub ZFS found some CRC errors, but I believe those were caused by the earlier issue and simply hadn’t been fixed yet; I was able to run the two scrubs mentioned above after fixing those. Working for me! I performed several scrubs, and while no data was lost or corrupted, each time one or more disks would generate some amount of CRC errors, just like my friend had been having! What is going on here… LSI/Avago controller related? A per-disk chance?
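A scrub like the ones described can be started and inspected with the standard zpool commands; a sketch, assuming a pool named "tank" (a placeholder) and Linux-style device names:

```shell
# Start a scrub of the pool (pool name "tank" is a placeholder):
sudo zpool scrub tank

# zpool status prints per-device READ / WRITE / CKSUM counters;
# list every disk whose CKSUM (checksum/CRC) counter is non-zero:
zpool status tank | awk '$1 ~ /^(sd|ata|wwn|nvme)/ && $5+0 > 0 {print $1, $5}'
```

The CKSUM column is where the errors discussed here show up; READ and WRITE stay at zero because the data itself is recoverable from redundancy.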

Put an end to the cost and complexity of storing, moving, and activating data at scale. This is a full article about the issue, but I have also made a video about it, so you can choose what to read or watch! I just created a RAID 5 pool with 5 disks and started to copy data. This happened in the first 8 hours of the drive working. Moving on and building my 8x10TB Seagate IronWolf ZFS mirror pool as discussed in this video, all worked well and so I started moving over my data.

[Sat Jan 1 21:51:17 2022] sd 0:0:6:0: [sdg] tag#0 CDB: Read(16) 88 00 00 00 00 01 9c 00 28 40 00 00 00 08 00 00

For myself, I have now been running for about a month with the new firmware, and having done lots and lots of tests during that period, not a single error has occurred anymore, so I believe the new SC61 firmware fixes this issue for good. Also important: I have noticed no negative side effects from this new firmware; speed and everything else is still great!

Upgrading your own drives
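The write cache that the firmware-plus-Synology update re-enables can also be inspected by hand on any Linux host with hdparm; a sketch for a SATA drive, again using the example path /dev/sda:

```shell
# Query the volatile write cache state (1 = on, 0 = off):
sudo hdparm -W /dev/sda

# If it was left disabled, it can be switched back on:
sudo hdparm -W1 /dev/sda
```

Note this only applies to SATA drives addressed directly; drives behind some LSI/Avago controllers may need the controller's own tooling instead.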



  • Fruugo ID: 258392218-563234582
  • EAN: 764486781913
  • Sold by: Fruugo

Delivery & Returns

Fruugo

Address: UK