• Announcements

    • Eric Newbauer

      Follow SNS on Twitter!

      Looking for instant updates? We now announce new versions and beta programs via Twitter as well. Follow Studio Network Solutions on Twitter. Thanks!

All Activity


  3. How can I batch render on SANmp?

    Thanks for the quick reply and detailed information. Much appreciated!
  4. How can I batch render on SANmp?

    Hello, If you need multiple computers to write to a volume simultaneously, you would probably be better off with a NAS share than a SANmp disk. NAS shares have become very competitive in speed with some recent improvements, and an EVO is capable of exporting storage as either. If you have SAN storage that can't do this, you could also mount the SANmp volume on one workstation and share it out using a NAS protocol such as SMB; a rough example of that follows below. There are multi-writer SAN protocols as well, such as Xsan, but they can be difficult to support. Thanks, Jason
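    Assuming the SANmp volume is mounted read/write on one Mac (the volume path and share name here are only placeholders to adjust for your setup), the built-in sharing command on OS X can publish it over SMB:

      # Placeholder path and share name - adjust to your SANmp volume.
      sudo sharing -a /Volumes/SANVOL -S Renders -s 001   # add an SMB-only sharepoint (flag digits: afp ftp smb)
      sudo sharing -l                                     # verify the sharepoint was created

    The render nodes would then write to smb://your-workstation/Renders, so only the one machine that owns the SANmp write mount ever touches the block device.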
  5. Is there ANY way to have more than one user access a drive as write? I work at a company that runs SANmp, and I use 3D/GFX programs such as Blackmagic Fusion Studio and Adobe After Effects. These programs offer the ability to use the computers on your local network to aid in rendering heavy scenes. I often want to batch render files when I have 80 or more clips/comps that need to be written out.

    Fusion can split the work across multiple computers running the slave software (Fusion Render Node), but it requires all slave computers to have write access to the output folder (which is on the SANmp). The same goes for After Effects and its "Watch Folders": you can dole out pieces to other render-only computers on your network, but they all need write access. I think the way SANmp works prevents this type of thing, since only one user at a time can have write permissions to a drive.

    My only workaround (we are on Macs here) is to make a public folder, locally on my computer, with write access for all, and then copy the rendered contents to the SANmp drive. It's a little more work and not ideal. I just wanted to ask if there are any workarounds or a different way of doing this in this type of environment. Thanks
  6. Faulty LD

    Hi Don, SANmp is our SAN management software. We're not sure what the hardware is, but we would be happy to take a look in a remote session if you'd like to open a support case at support.studionetworksolutions.com.
  7. Faulty LD

    Hi Jason, this is not an EVO system; it is an older SANmp system. Our tech thinks it is a bad RAID card.
  8. Faulty LD

    Hi Don, That's not an error message we've encountered before. Is that coming from an EVO? Please feel free to open a support case by sending email to support@studionetworksolutions.com and we will help investigate. Thanks, Jason
  9. I have 5 volumes: 3 that were purchased originally and 2 that were added about a year ago. This morning, the original 3 volumes disappeared from the volume list, and the following fault is shown: DG + FAULTY LD +. Does anyone have any idea what this means? Most of the drives in the array have orange tallies. Thanks.
  10. Starting over with new drives

    Hello! It looks like the difference you are seeing is that the new SAN volume has not been converted into a SANmp disk. This is done with SANmp Admin, which allows the SANmp Client software to ensure that only one user has write access to the SAN volume at any given time. This protects the file system from becoming corrupted by simultaneous writes from multiple users.

    For an 8-disk pool, we recommend RAID-5, which allows the RAID to recover from a single-disk failure.

    The default stripe size on the EVO is 128. In our tests, we found this configuration to offer the best overall performance for most situations. In theory, a larger stripe size should offer better performance for large files; however, as media files become more and more massive, the stripe size appears to have less of an impact on performance. That said, blanket statements about performance can hardly be made, and it may still be worth testing different stripe sizes.

    The specific MTU value used is less important than that it matches on the EVO, the workstation, and any switches/routers in between. The typical configuration is an MTU of 1500 for 1GbE and 9000 for 10GbE. A quick end-to-end check for the MTU setting is sketched below.

    Please feel free to open a support case if there are any other questions or concerns. Thanks! Alan
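    One way to sanity-check an MTU change end to end (a sketch; en0 and the address 10.0.0.50 are placeholders for your interface and the EVO's IP) is a don't-fragment ping sized just under the jumbo MTU:

      sudo ifconfig en0 mtu 9000    # enable jumbo frames on the workstation NIC (placeholder interface)
      ping -D -s 8972 10.0.0.50     # 8972 = 9000 - 20 (IP) - 8 (ICMP); -D sets don't-fragment

    If these pings fail while ordinary pings succeed, a device somewhere in the path is still at an MTU of 1500.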
  11. Starting over with new drives

    Other questions/comments on tuning... I plan to set my logical disks in a RAID-6 configuration, if that makes a difference. I am also curious about the preferred stripe size. I am considering 1024 for that. Any reasons for or against? What are most people using? Should I crank it to 4096? I am also finally going to change all my users' connections to a network MTU of 9000 instead of 1500, and change those settings on the EVO as well. Any reason not to? Does it actually help? Should that affect my choice of stripe size? Or should I use a custom power-of-two MTU like 2048, 4096, or 8192? Does any of this matter at all?
  12. I have an EVO with both SAN and NAS volumes, and I am replacing drives. I created the new disk pool successfully. I then created a logical disk (RAID), added it as a Fibre Channel target, added it as an iSCSI target, and used the globalSAN initiator to connect to the logical drive. So far, so good, I think.

    Now is when it needs to be formatted. There seems to be no tool within the EVO to do this, and it seems you are supposed to do it from your OS. I am on Mac OS X 10.10.5, so I format in Disk Utility. I have also done command-line diskutil formatting, with the same results. (I'm not huge into command-line stuff, but I figured that one out with some googling.)

    My new SAN logical drives show up very differently than they used to. In ShareBrowser, they appear black with a blue center instead of the gray/green/red (depending on mount status) that they used to. The old way had the SANmp logo on the icon and a space above the icon showing the R/RW mount status. The new ones aren't that way, and they cannot be unmounted in ShareBrowser.

    In Disk Utility, my OLD SAN logical disks show up with the following info:

      Name : SNS_EVO A-EditSAN1 Media
      Type : Disk
      Partition Map Scheme : SNS_partition_scheme_v1
      Disk Identifier : disk3
      Media Name : SNS_EVO A-EditSAN1 Media
      Media Type : Generic
      Connection Bus : iSCSI
      Device Tree : IODeviceTree:/
      Writable : Yes
      Ejectable : Yes
      Location : External
      Partition Type : Core Storage
      Owners Enabled : No

    The NEW SAN logical disks show up with the following info:

      Name : SNS_EVO D-teststripe4096 Media
      Type : Disk
      Partition Map Scheme : GUID Partition Table
      Disk Identifier : disk2
      Media Name : SNS_EVO D-teststripe4096 Media
      Media Type : Generic
      Connection Bus : iSCSI
      Device Tree : IODeviceTree:/
      Writable : Yes
      Ejectable : Yes
      Location : External
      Format : Mac OS Extended
      Owners Enabled : Yes

    Notably, the Partition Map Scheme and the Partition Type are quite different. What am I doing wrong here? How do I get the drives formatted in the proper fashion, assuming HFS+/non-journaled is the way to go? Any help appreciated.
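    For reference, the command-line formatting I ran was along these lines (disk2 and the volume name are from my setup; anyone copying this should double-check the identifier with diskutil list before erasing anything):

      diskutil list                                   # find the new target's device identifier
      diskutil eraseDisk "HFS+" EditSAN2 GPT disk2    # non-journaled HFS+ on a GUID partition map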
  13. Is there a certain version of the OS that does not produce these errors? I don't remember getting them on 10.10, but I don't recall if these started under 10.11 or 10.12.
  14. Hello, We have noticed what seems to be a bug in newer versions of Mac OS relating to high-bandwidth iSCSI transfers, especially over 10G connections. We have notified Apple, but we haven't received any indication that they intend to fix it. As a consequence, we enabled the header and data digests by default to catch these errors and ensure a reliable data stream. We don't recommend attempting your backups without error correction, but if you'd like to test the difference in speed, you could set up a test that doesn't use production data; a sketch of the target-side digest settings follows below. The impact of the extra calculations for the error correction may be high or low depending on the capabilities of the workstation, so we can't make any sort of blanket statements. Thanks, Jason
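    If you do run such a test, the digests can also be toggled per target on the Linux tgtd side (a sketch, assuming a scsi-target-utils build that exposes these parameters; tid 2 stands in for a scratch target):

      # Negotiate CRC32C digests on a scratch target (tid 2 is a placeholder).
      tgtadm --lld iscsi --mode target --op update --tid 2 -n HeaderDigest -v CRC32C
      tgtadm --lld iscsi --mode target --op update --tid 2 -n DataDigest -v CRC32C

    Running the same copy once with CRC32C and once with None on that scratch target should show the digest overhead on your hardware without risking production data.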
  15. I have macOS and the Server.app running as a Time Machine server on a late-2014 Mac mini (i5, 8GB RAM) that mounts an iSCSI target on a CentOS 7.3 server running tgtd, which it uses to store Time Machine backups. Both the mini and the storage server are located in a rack and are connected to the same 1GbE switch. There are 30 hosts that back up to this mini, which has 3 NICs (built-in plus 2x Thunderbolt adapters). I have AFP traffic going over one Thunderbolt NIC and iSCSI over the other to help overall bandwidth, but it seems that under load I get a couple of types of errors.

    The problem is worst on Monday morning, when everyone returns to the office and many backups start at once, but it also seems to generate an error about once an hour as Time Machine does its hourly backup. I've limited the number of incoming connections and I'll see if that helps next week, but I'd like any advice to help things. Is there some way to find out how much load is caused by having header and data digests turned on? Without risking the integrity of these backups, of course. Or to check the initiator for other problems or performance tweaks?

    Part of the log from the iSCSI target server (sanitized a bit):

      Apr 12 13:37:22 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_task_queue(1627) unexpected cmd_sn (912266,912267)
      Apr 12 13:37:22 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(140) Forcing release of tx task 0x1fbb010 719a0704 1
      Apr 12 14:37:35 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_task_queue(1627) unexpected cmd_sn (358418,358419)
      Apr 12 14:37:35 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(140) Forcing release of tx task 0x2005010 bdd91c04 1
      Apr 12 14:38:21 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_task_queue(1627) unexpected cmd_sn (11254,11255)
      Apr 12 14:38:21 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(140) Forcing release of tx task 0x1fac790 8a701d04 1
      Apr 12 15:38:38 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_rx_handler(2184) rx hdr digest error 0xf6a4e45 calc 0xbb45309
      Apr 12 15:38:38 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(167) Forcing release of rx task 0x1fac4d0 6e753d04
      Apr 12 15:38:44 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_rx_handler(2184) rx hdr digest error 0x250501c7 calc 0xc87f5524
      Apr 12 15:38:44 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(167) Forcing release of rx task 0x1faca50 6d893d04
      Apr 12 16:37:28 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_rx_handler(2184) rx hdr digest error 0x0 calc 0x7dfed407
      Apr 12 16:37:28 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(167) Forcing release of rx task 0x2012010 61c34804
      Apr 12 17:36:08 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_rx_handler(2184) rx hdr digest error 0x31f65 calc 0x9461ba36
      Apr 12 17:36:08 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(167) Forcing release of rx task 0x20ad010 21814e04
      Apr 13 09:42:43 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_rx_handler(2184) rx hdr digest error 0xbbb4c905 calc 0x85e1c3a8
      Apr 13 09:42:43 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(167) Forcing release of rx task 0x1ff1010 4e07fa04
      Apr 13 09:46:06 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_task_queue(1627) unexpected cmd_sn (55976,55977)
      Apr 13 10:52:02 StorageServer.Corp.Company.com tgtd[4538]: tgtd: iscsi_rx_handler(2184) rx hdr digest error 0x0 calc 0xfb93b70b
      Apr 13 10:52:02 StorageServer.Corp.Company.com tgtd[4538]: tgtd: conn_close(167) Forcing release of rx task 0x2133010 1fa83e05
  16. I figured out it was two issues: one system had the Admin dongle attached, so it logged in to SANmp, and the other system was connecting with globalSAN, which I have now turned off. If I could, I would delete this post!
  17. Hi, I have two systems running macOS Sierra, both connected via Fibre and Ethernet. Both can connect to the EVO over Ethernet, and both can connect to the EVO through SANmp over Fibre. But one machine, when using ShareBrowser, will only connect over the Ethernet EVO-only connection. The other machine is fine and goes straight to 'Logged in SANmp'. I have tried forcing it to log in in full SANmp mode, but it just fails to log in. I've checked all the settings between the two machines and even tried different users in case that was the problem. I am running the latest versions of both SANmp and ShareBrowser, on both systems! Any ideas? Thanks
  18. It sounds like Xtarget may be a good solution if you want to share a Fibre-attached RAID over iSCSI. Contact us if you'd like more information!
  19. Hello, The initiator is strictly a connection mechanism, with no concept of the file system. The workstation operating system handles all file operations, but it does need reliable communication with the target disk. From what we've seen, the initiator maintains the connection to the storage without issue, but as noted in a previous reply, the storage system doesn't seem to support the error detection policy that is set by default in globalSAN, so this protective policy gets disabled. Error 36 is often indicative of trouble with the file system. If you're still able to read from the disk(s), backing up the data (and maintaining current backups) would be a good first step. Most storage systems (including our own!) do include the error detection policy as added protection for block level communication.
  20. Well, after purchasing the globalSAN iSCSI Initiator, I am running into this same error -36 on just about every file I try to copy from my DroboPro. It won't even copy files to my iPhone from iTunes anymore. This is terrible, and it renders this software useless to me. What's more alarming to me is the date of the last post here. Has anyone found a viable solution since then?
  21. Thank you for the info. It gives me a good start, coming from my experience with Xsan. Out of interest, if we decide to go with the SNS SAN solution iSANmp instead of Xsan, to save the hardware for a metadata controller: how is it different from Xsan in functionality and performance over Ethernet? Also, would we have to reformat any RAIDs that are set up as mentioned above with iSCSI? Will iSCSI plus iSANmp work with a Tiger RAID that is connected to a Mac Pro over Fibre, sharing it out over Ethernet using iSCSI? Thanks
  22. Hi Alex, If a single iSCSI connection is to be used, then the machine with the connection can share the mounted volume to the other machines using AFP (or SMB). The iSCSI workstation manages the data, since SAN storage is seen as local, and then shares it as it would any other local volume it owns. There is no need for Xsan, since the other machines will not have a block-level (iSCSI or FC) connection to the storage.

    You said there is no plan to have multiple initiators, but if you change your mind and would like the other machines to have the speed benefit of a SAN connection, then some sort of SAN management will be required; otherwise, if multiple machines concurrently connected to the same block-level target, they would each assume ownership and quickly corrupt the file system. While Xsan can be made to work with iSCSI, we offer a much simpler alternative, SANmp (or iSANmp), which does not require a metadata network and allows cross-platform sharing. You can read more about iSCSI sharing here.
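    As a minimal sketch of that single-initiator share-out (the volume path and share name are placeholders), the Mac with the iSCSI mount can publish an AFP sharepoint with the built-in sharing command:

      sudo sharing -a /Volumes/SANVOL -A Projects -s 100   # AFP-only sharepoint (flag digits: afp ftp smb)

    The other machines then connect over AFP exactly as they would to any ordinary file server.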
  23. I have a few questions regarding the SNS iSCSI initiator and globalSAN... I have some experience with Xsan, so I am aware of how Xsan functions when connected to workstations over Fibre. But we have received a new solution using iSCSI with a Synology RAID/NAS. The gentleman who set up this solution mentioned there is no need to set it up with Xsan, since iSCSI works at the block level and is managed by the Synology hardware.

    This is the setup: a Mac Pro as the controller, configured with Apple Server and a shared folder (AFP), connected over 10G Ethernet to the Synology RAID. Does this make sense? Why even have a Mac Pro as a controller if the Synology is managing the data, right? Also, wouldn't Xsan boost the speed even if the clients are connected over shared folders? There is NO plan in the future to install an initiator on each Mac to give direct access to the RAID as there would be with an Xsan setup over Fibre. Thanks
  24. Alan, I've left the policy open (no specific host-based ACL == Allow All), but perhaps I will create a policy and implement it. Will report back on the result. jy
  25. Hi Jeff, Some storage systems maintain a list of authorized iSCSI initiators, and the iSCSI name from the workstation has to be manually added and approved. If you continue to have trouble, feel free to open a support case, and we can take a closer look at the system. Thanks, Alan
  26. Hi, I'm having trouble on a previously working system. I'm attaching a Mac mini (OS X 10.10.5 Yosemite) to a QNAP. I have two QNAPs, a TS-412U and a TS-853A. Right now I'm interested in attaching to a newly created target on the 412. The 412 and the Mac are connected via a dedicated Ethernet link, and they can ping each other. The 412 is running firmware 4.2.3 (latest stable).

    When I create a portal to include the 412, I don't see any targets at its IP address. If I try to create the targets in the Mac's System Preferences and connect to the QNAP 412, I get the following error message: "1.2.0. Error accessing kernel extension. Please try to reinstall globalSAN software."

    What I've done so far: I have targets on the 853 that are known and were previously working, but I don't see those from the Mac mini either. The 853 is running 4.3.3.0095. I have removed the software, deleted the configuration, rebooted the machine, reinstalled the software, rebooted again, and reconfigured... and I get the same behaviour and errors. Help? jy
  27. Thanks. It seems OS X won't let you grow (resize) a partition unless it was ORIGINALLY formatted as journaled. However, most guidance for a SAN target used in video editing (like Final Cut Pro) recommends NOT journaling, as it introduces slowness. As a result, I had to drop and recreate the partition on the (newly) larger storage space underneath.
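    It may be worth testing (on non-production data) whether temporarily enabling journaling satisfies the resize check before dropping the partition. A sketch I have not verified, where disk3s2 is a placeholder slice to confirm with diskutil list first:

      diskutil enableJournal disk3s2    # temporarily add a journal to the volume
      diskutil resizeVolume disk3s2 R   # R = grow the volume to fill the available space
      diskutil disableJournal disk3s2   # return to non-journaled HFS+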