SNS Users' Forums

jon123

Members
  • Content count: 8
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About jon123
  • Rank: Newbie
  1. I forgot I said that I would report back, so here goes... Since switching from using a zvol and zfs shareiscsi to using a sparse file and COMSTAR, I've had no issues with Time Machine. It's been about 3 weeks, so it's looking good. Let's hope I didn't just jinx it.
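     A minimal sketch of the sparse-file/COMSTAR setup, for anyone curious (the pool path, file size, and GUID here are placeholders, not my exact values):

        # Create a sparse backing file on the pool (-n allocates no blocks up front)
        mkfile -n 400g /tank/tm/tmbackup.img

        # Enable the STMF framework and register the file as a logical unit
        svcadm enable stmf
        sbdadm create-lu /tank/tm/tmbackup.img

        # Enable the iSCSI target service, create a target, and expose the LU
        svcadm enable -r svc:/network/iscsi/target:default
        itadm create-target
        stmfadm add-view <GUID>    # GUID as printed by sbdadm create-lu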
  2. Wow time flies. I finally got around to setting this up with a sparse file today, will report back the results. Thanks again for the help.
  3. I'm aware of the "releases" of OpenSolaris, but I believe they do try to resolve critical bugs prior to a release (which is why 134 still hasn't been released, though it was planned for March?). This is usually why the releases are actually like 111b: 111 was the "non-release", and 111b was the release with some fixes without touching the 112 branch. I'll do a little more research into 134, though, and consider an upgrade; I just need to be sure I'm not going to hit any of the data-loss cases. Last I checked, it was dedup that was causing problems...

     Anyway, the OSol box is just a system I put together. It has:
       • ASUS M2N32-SLI mobo
       • AMD Athlon 64 X2 5000 Brisbane 2.6GHz
       • 4GB RAM
       • 4 Samsung EcoGreen F3 2TB HDs, set up as a zpool with RAID-Z (there is another 3-drive zpool, but I am not using that for any iSCSI work)

     The zvol I'm using is just 400GB of the large zpool. A quick test shows that this pool did about 32MB/s via SMB; even though this isn't great, it's acceptable for my purposes, but I'd still like to get to the bottom of the speed issues.

     UPDATE: It would seem the max throughput I'm getting to that pool is ~46MB/s (tested using dd reading from /dev/urandom, writing 16,384 64k blocks for a total of 1GB of data). ZFS compression is turned off; the only thing enabled is sharesmb. So now I am confused.
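     For reference, the dd test was along these lines (the output path is a placeholder on the RAID-Z pool):

        # 16,384 x 64k blocks = 1GB of pseudo-random data written to the pool
        dd if=/dev/urandom of=/tank/ddtest.bin bs=64k count=16384

     One caveat: on hardware of this class, /dev/urandom itself can often only produce a few tens of MB/s, so the ~46MB/s figure may be measuring the random-number generator rather than the pool.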
  4. Thanks for the suggestions. I had heard some reports of data loss on snv134, so I've been waiting for an actual release (which, since Oracle took over, has already slipped by 4 months, with no official word). I am already using /dev/zvol/rdsk/ though. The only thing I haven't done yet is enable jumbo frames on the OpenSolaris box; from what I recall this should increase speeds, maybe even by 50%, but even then 4.8MB/s wouldn't be impressive. I'm going to have to do some more research to see if I can't find the major speed issue here. Thanks again.
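     For anyone else trying the jumbo-frame route, a minimal sketch on OpenSolaris (e1000g0 is a placeholder link name, and every NIC and switch in the path must also support a 9000-byte MTU):

        # Check the current MTU on the data link
        dladm show-linkprop -p mtu e1000g0

        # Raise it to 9000 (some drivers require the interface to be unplumbed first)
        dladm set-linkprop -p mtu=9000 e1000g0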
  5. I got COMSTAR up and running on snv 111. I'm not sure yet whether this is going to fix the problems I was having, but... I am seeing a huge speed loss: I just checked, and I'm averaging only 3.25MB/s over 1GbE. Any ideas? Update: So far it is running better since COMSTAR; the backup has been running for over 4 hours, so hopefully this works out. The speed is still disappointing, though.
  6. I have not tried COMSTAR. Right now I just have "shareiscsi" turned on on a zfs zvol. This was working fine for about 2 years before 10.6.4 and/or 4.0.0 (build 204).
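     For context, that legacy setup amounts to roughly this (pool and volume names are placeholders):

        # Create a zvol and share it via the old built-in ZFS iSCSI target
        zfs create -V 400g tank/tmvol
        zfs set shareiscsi=on tank/tmvol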
  7. I've been having numerous problems since updating to 10.6.4. After updating (about a month ago), I was having problems with my previous build of the initiator, so I upgraded to the 4.0.0.204 build. I had some initial trouble because I had not uninstalled the 3.3 release; eventually I manually went through and removed everything related to SNS on the system, then did a fresh install of 4.0.0.204. I'm connecting to an iSCSI target on an OpenSolaris box running snv 111b. Now, some of the problems I'm seeing (these are all new problems; things were working great on 10.6.3 and the 3.3 initiator)...

     The first problem I had was Time Machine backups failing. After trying to get TM to resume a few times, it would eventually get to a point where it was stuck at "indexing backup" (not Spotlight; this was the Time Machine status message). I left it at this point for over a day at one point, just to see if it would make it past it, but it never did.

     At this point I decided to start fresh and wipe the partition. I tried to unmount the device, and OS X complained that it couldn't. One time I tried the "force unmount" option, and this hung. I have also tried manually disconnecting from within the globalSAN preference pane (by selecting the connected disk and clicking "Disconnect"). This caused the OS to give the device-removal error, but for some reason the disk doesn't appear to be properly unmounted (in the Finder). At this point, in the globalSAN preference pane the Target (shown on the left) is red and indicates "Disconnected", but on the right the connection is yellow, and I can no longer click "Connect". I normally have "Persistent" on since this was for Time Machine, but if I turn that off and reboot the Mac, I can then connect, go into Disk Utility, and wipe the disk. At this point the OS seems to work with it well, but after a little while of TM running, it eventually fails, and I'm back at square one.

     I found a few errors in my logs:

        7/9/10 Fri, Jul 9 | 12:38:37 PM kernel GLO Warning: Unsupported opcode 0x25 in iSCSIConnectionNub::ResponseDidReceived
        7/9/10 Fri, Jul 9 | 12:38:37 PM kernel GLO Warning: Tail (65536 bytes) of the Data Segment (PDU 0x7c2f400) will be ignored.
        7/9/10 Fri, Jul 9 | 12:38:37 PM kernel GLO Warning: Tail (65536 bytes) of the Data Segment (PDU 0x7c2f400) will be ignored.
        7/9/10 Fri, Jul 9 | 12:38:37 PM kernel GLO Warning: Tail (65536 bytes) of the Data Segment (PDU 0x7c2f400) will be ignored.
        7/9/10 Fri, Jul 9 | 12:38:37 PM kernel GLO Warning: Tail (65536 bytes) of the Data Segment (PDU 0x7c2f400) will be ignored.
        7/9/10 Fri, Jul 9 | 12:40:37 PM kernel GLO Warning: Error 32 while receiving BHS.
        7/9/10 Fri, Jul 9 | 12:40:37 PM kernel GLO Warning: Receiving thread has stopped with error 32.
        7/10/10 Sat, Jul 10 | 11:45:36 AM kernel GLO Warning: Timeout detected for connection 0x8a51f00 in state 2
        7/10/10 Sat, Jul 10 | 11:45:36 AM kernel GLO Warning: Error 32 while receiving BHS.
        7/10/10 Sat, Jul 10 | 11:45:36 AM kernel GLO Warning: Receiving thread has stopped with error 32.
        7/10/10 Sat, Jul 10 | 6:46:45 PM kernel GLO Warning: Error 32 while receiving BHS.
        7/10/10 Sat, Jul 10 | 6:46:45 PM kernel GLO Warning: Receiving thread has stopped with error 32.

     The image below shows the current state; as you can see, the Finder isn't convinced that the disk is gone, and the globalSAN software seems to be a little confused as well.
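     In case it helps anyone stuck at the same unmount step, the command-line equivalent of the force unmount I tried (diskN is a placeholder for the iSCSI disk's device node):

        # Find the device node assigned to the iSCSI disk
        diskutil list

        # Force-unmount every volume on that disk
        diskutil unmountDisk force /dev/diskN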
  8. Have you tried it on 10.6 yet? This will certainly hold me back from upgrading to 10.6 if it does not work, which would be most unfortunate.