Friday, December 30, 2011

How to take Advantage of your connection speed and NIC

First, "What is a NIC?" you might ask yourself. A NIC is a Network Interface Controller. Pretty much it's the device you stick your CAT-3(Ethernet) cable into. This device can be altered as well as some software based settings implemented by the OS(referring to the windows operating system.) These option if set correctly to your system can become very beneficial to having better gameplay and taking advantage of your ISP. Okay, time to get to the point. You can either a.(do all this by using the registry editor and command prompt in windows.) or b. use www.speedguide.net I will do this by using the TCP Optimizer. Okay lets go through each setting and select the best setting for you. Connection speed: This should be set to your max download rate. So if you download at 256 kilobytes a second, it would be 2 Mbps. Here is a list of speeds: 128 KBps = 1 Mbps 256 KBps = 2 Mbps 512 KBps = 4 Mbps 1 MBps = 8 Mbps 2 MBps = 16 Mbps The calculation to find your exact speed is you take your highest download speed you've ever reached and multiply it times 8(assuming it was in megabytes) and select the closest number on the slider. You can also use this site to help you with any bit/byte calculations. www.matisse.net Now, select your network adapter. You're MTU should be set as high as your ISP can give you, and you can find your MTU by using the MTU/Latency tab. The PPPoE setting will not be explained in this guide due to the rarity of use. TCP Window Auto-Tuning should be set to ...


Friday, December 2, 2011

Citrix NetScaler


Citrix NetScaler is a set of appliances that form a web application delivery solution with the capacity to speed up application performance by up to 5x. Citrix NetScaler also reduces data center costs and enhances the security of your web applications. Appliances range from the entry-level 7000 series up to the most modern MPX series, which is capable of 15 gigabits per second of throughput at both layer 4 and layer 7 with all functional modules in constant use. Ideal for any enterprise that needs to accelerate existing web applications, Citrix NetScaler also increases web application security and enhances web application availability.

Citrix NetScaler is an efficient and secure way to maximize web application delivery, and one of the few IT products on the market whose performance gains can benefit almost any organisation.

NetScaler merges the features and functions of conventional data centre point devices into a single network appliance, built from the ground up to optimize the following aspects of web application delivery:

Load balancing, Caching, Compression, SSL offload and acceleration, Attack defence (DoS etc.), SSL VPN

From a reseller's point of view, NetScaler is a very technical sale and the results speak for themselves. Getting the customer to evaluate the product in a live environment (or as close to it as feasibly possible) is key to the success of the opportunity. Over 80% of evaluations result in a purchase order. From an end-user perspective, Citrix NetScaler products provide:

Maximized application performance, end-to-end application security, continuous application availability, and reduced cost of operations

Citrix NetScaler products enhance the performance of Oracle, PeopleSoft, SAP, Siebel, Outlook Web Access, e-commerce applications and custom applications by 70% or more, while also enhancing security and significantly reducing operational expenditure.

Citrix NetScaler uses TCP acceleration technology to speed up the performance of web applications by up to five times. Citrix AppCompress and Citrix AppCache provide data compression and keep static and dynamic content in transitory storage (caching) to quicken responses from the web application. NetScaler TCP optimization addresses the problems of high latency and congested network links; it is transparent to both the user and the server and requires minimal or, in some cases, no configuration. Real-time monitoring and historical, page-level visibility of end users are available through Citrix EdgeSight for NetScaler.

Citrix NetScaler also reduces the cost of delivering web applications by lowering the number of servers required and maximising the use of available network bandwidth. MPX, the latest NetScaler series, delivers numerous 10GE ports and the power to run all modules at a throughput of 10 gigabits per second even at layer 7. This performance removes the need for unnecessary network segmentation, which can reduce the number of switches and other elements and consequently cut infrastructure costs. With the AppExpert Visual Policy Builder there is no need for coding or scripting when creating application delivery policies. The current NetScaler can also lower operating costs by consolidating several functions into one solution. Citrix EasyCall, which is part of the NetScaler package, increases employee productivity and reduces the cost of telephone communication. To manage several NetScaler appliances, NetScaler Command Center is available separately and provides centralized administration for system configuration, event management, performance management, and SSL certificate administration.

Citrix NetScaler delivers protection against application-layer attacks and helps prevent the leakage or theft of valuable customer and company data. Citrix Request Switching can act as an open proxy for permitted users and a closed proxy for users without permission, and comes with high-performance, built-in defences against denial-of-service (DoS) attacks. By configuring the Surge Protection and Priority Queuing functions, it is easier to manage the traffic surges that can overwhelm a web application server.

Citrix NetScaler directs user requests intelligently to ensure the best possible distribution of traffic. Traffic management policies for TCP applications can refer to layer 4 information (protocol and port number) or to application-layer content. An administrator can segment application traffic based on information held in the TCP payload or the HTTP request body, as well as on L4-7 header information such as the URL, the type of application data, or cookies. A large selection of load-balancing algorithms and the facility to create health checks protect application availability by ensuring that user requests are directed only to servers that are behaving correctly. While other solutions claim to enhance the performance of Web-enabled applications, only Citrix NetScaler products optimise both the performance and the security of Web-enabled applications. In addition, because NetScaler's single, integrated device takes the place of a number of point solutions, the overall network infrastructure is considerably simplified and total operational costs are significantly reduced.
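As a rough illustration of the kind of layer-7 policy described above (this is not NetScaler's configuration syntax or API, just a conceptual sketch with made-up server names and URL prefixes), the following Python fragment routes requests by URL prefix and then picks the healthy back-end server with the fewest active connections:

# A minimal sketch of layer-7 content switching plus least-connection
# load balancing with health checks.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    healthy: bool = True          # updated by a periodic health check
    active_connections: int = 0   # requests currently in flight

@dataclass
class Pool:
    servers: list = field(default_factory=list)
    def pick(self):
        candidates = [s for s in self.servers if s.healthy]
        if not candidates:
            raise RuntimeError("no healthy servers in pool")
        # least-connection algorithm: pick the least busy server
        return min(candidates, key=lambda s: s.active_connections)

# Content-switching policy: URL prefix -> server pool
pools = {
    "/images/": Pool([Server("img-1"), Server("img-2")]),
    "/api/":    Pool([Server("app-1"), Server("app-2"), Server("app-3")]),
}
default_pool = Pool([Server("web-1"), Server("web-2")])

def route(url_path):
    for prefix, pool in pools.items():
        if url_path.startswith(prefix):
            return pool.pick()
    return default_pool.pick()

server = route("/api/orders/42")
server.active_connections += 1
print("routed to", server.name)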




Monday, November 28, 2011

State of CAD and Engineering Workstation Technologies


Abbreviations

CAD is Computer Aided Design
CAE is Computer Aided Engineering
CEW is Computer Aided Design and Engineering Workstation
CPU is Central Processing Unit
GPU is Graphics Processing Unit

Hardware for CPU-Intensive Applications

Computer hardware is designed to support software applications and it is a common but simplistic view that higher spec hardware will enable all software applications to perform better. Up until recently, the CPU was indeed the only device for computation of software applications. Other processors embedded in a PC or workstation were dedicated to their parent devices such as a graphics adapter card for display, a TCP-offloading card for network interfacing, and a RAID algorithm chip for hard disk redundancy or capacity extension. However, the CPU is no longer the only processor for software computation. We will explain this in the next section.

Legacy software applications still depend on the CPU to do computation. That is, the common view is valid for software applications that have not taken advantage of other types of processors for computation. We have done some benchmarking and believe that applications like Maya 03 are CPU intensive.

For CPU-intensive applications to perform faster, the general rule is to have the highest CPU frequency, more CPU cores, more main memory, and perhaps ECC memory (see below).

Legacy software was not designed for parallel processing. Therefore we shall check carefully with the software vendor on this point before expecting multi-core CPUs to produce higher performance. Regardless, we will achieve higher throughput by executing multiple instances of the same application, but this is not the same as multi-threading within a single application.

ECC stands for Error-Correcting Code (also expanded as Error Checking and Correction). A memory module transmits data in 64-bit words. ECC memory modules incorporate electronic circuits that detect a single-bit error in a word and correct it, but they cannot rectify two bit errors occurring in the same word. Non-ECC memory modules do not check at all; the system simply continues to work unless a bit error violates pre-defined rules for processing. How often do single-bit errors occur nowadays, and how damaging would one be? Wikipedia (May 2011) puts it this way: "Recent tests give widely varying error rates with over 7 orders of magnitude difference, ranging from 10^-10 to 10^-17 errors per bit-hour, roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory."
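To put those rates in perspective, here is a quick back-of-the-envelope check in Python using only the figures quoted above (the low end actually works out to well under one error per century per gigabyte):

# Expected single-bit errors per gigabyte, from a rate given in
# errors per bit-hour. One gigabyte is roughly 8e9 bits.
BITS_PER_GB = 8e9
HOURS_PER_YEAR = 24 * 365

high_rate = 1e-10 * BITS_PER_GB     # errors per GB-hour at the high end
low_rate = 1e-17 * BITS_PER_GB      # errors per GB-hour at the low end

print(high_rate)                            # 0.8   -> roughly one error per hour
print(1 / (low_rate * HOURS_PER_YEAR))      # ~1430 -> one error every ~1400 years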

Hardware for GPU-Intensive Applications

The GPU has now evolved to earn the prefix GP, for General Purpose. To be exact, GPGPU stands for General Purpose computation on Graphics Processing Units. A GPU has many cores that can be used to accelerate a wide range of applications. According to GPGPU.org, which is a central resource for GPGPU news and information, developers who port their applications to the GPU often achieve speedups of orders of magnitude compared to optimized CPU implementations.

Many software applications have been updated to capitalize on the newfound potential of the GPU. CATIA 03, Ensight 04 and Solidworks 02 are examples of such applications. As a result, these applications are far more sensitive to GPU resources than to CPU resources. That is, to run such applications optimally, we should invest in the GPU rather than the CPU for a CEW. According to its own website, the new Abaqus product suite from SIMULIA - a Dassault Systemes brand - leverages the GPU to run CAE simulations twice as fast as a traditional CPU.

Nvidia had released six cards in the new Quadro Fermi family by April 2011, in ascending order of power and cost: 400, 600, 2000, 4000, 5000 and 6000. According to Nvidia, Fermi delivers up to six times the tessellation performance of the previous family, Quadro FX. We shall equip our CEW with Fermi to achieve the optimum price/performance combination.

The potential contribution of the GPU to performance depends on another issue: CUDA compliance.

State of CUDA Developments

According to Wikipedia, CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by Nvidia. CUDA is the computing engine in Nvidia GPU accessible to software developers through variants of industry-standard programming languages. For example, programmers use C for CUDA (C with Nvidia extensions and certain restrictions) compiled through a PathScale Open64 C compiler to code algorithms for execution on the GPU. (The latest stable version is 3.2 released in September 2010 to software developers.)

The GPGPU website has a preview of an interview with John Humphrey of EM Photonics, a pioneer in GPU computing and developer of the CUDA-accelerated linear algebra library. Here is an extract of the preview: "CUDA allows for very direct expression of exactly how you want the GPU to perform a given unit of work. Ten years ago I was doing FPGA work, where the great promise was the automatic conversion of high level languages to hardware logic. Needless to say, the huge abstraction meant the result wasn't good."

The Quadro Fermi family implements CUDA compute capability 2.1, whereas Quadro FX implemented 1.3. The newer version provides significantly richer features. For example, Quadro FX did not support "floating point atomic additions on 32-bit words in shared memory" whereas Fermi does. Other notable improvements are:

Up to 512 CUDA cores and 3.0 billion transistors
Nvidia Parallel DataCache technology
Nvidia GigaThread engine
ECC memory support
Native support for Visual Studio

State of Computer Hardware Developments

Abbreviations

HDD is Hard Disk Drive
SATA is Serial AT Attachment
SAS is Serial Attached SCSI
SSD is Solid State Disk
RAID is Redundant Array of Inexpensive Disks
NAND is memory based on the "Not AND" gate algorithm

Bulk storage is an essential part of a CEW for processing in real time and archiving for later retrieval. Hard disks with SATA interface are getting bigger in storage size and cheaper in hardware cost over time, but not getting faster in performance or smaller in physical size. To get faster and smaller, we have to select hard disks with SAS interfaces, with a major compromise on storage size and hardware price.

RAID has been around for decades for providing redundancy, expanding the size of volume to well beyond the confines of one physical hard disk, and expediting the speed of sequential reading and writing, in particular random writing. We can deploy SAS RAID to address the large storage size issue but the hardware price will go up further.

SSD has turned up recently as a bright star on the horizon. It has not replaced HDD because of its high price, the longevity limitations of NAND memory, and the immaturity of controller technology. However, it has recently found a place as a RAID cache, offering two important benefits not achievable by other means. The first is a higher speed of random reads. The second is a low cost point when used in conjunction with SATA HDDs.
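To make the read-cache idea concrete, here is a toy sketch in Python (not tied to any particular RAID controller) of serving reads from a small, fast cache tier in front of a large, slow backing store:

from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache: a small fast tier (the SSD) in front of a
    large slow tier (the SATA RAID array)."""
    def __init__(self, backing_store, capacity):
        self.backing_store = backing_store   # e.g. dict of block -> data
        self.capacity = capacity             # how many blocks fit in the cache
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)    # mark as recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1                      # slow path: go to the HDDs
        data = self.backing_store[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

store = {n: f"block-{n}" for n in range(1000)}
cache = ReadCache(store, capacity=100)
for n in [1, 2, 3, 1, 2, 3, 500, 1]:
    cache.read(n)
print(cache.hits, cache.misses)   # repeated reads of hot blocks hit the cache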

Intel has been shipping stable, bug-free Sandy Bridge CPUs and chipsets since March 2011. System computation performance is over 20% higher than the previous generation, called Westmere. The top CPU model has four editions that are officially capable of over-clocking to over 4GHz as long as CPU power consumption stays within the designed thermal limit, called TDP (Thermal Design Power). The 6-core edition with official over-clocking support is due in the June 2011 timeframe.

Current State & Foreseeable Future

Semiconductor manufacturing technology has improved to 22 x 10^-9 metres (22 nanometres) this year, 2011, and is heading towards 18 nanometres in 2012. Smaller means more: we will get more cores and more power from a new CPU or GPU made on advancing nanotechnology. The current laboratory probe limit is 10^-18 metres, and this sets the headroom for semiconductor technologists.

While GPU and CUDA are having big impacts on performance computing, the dominant CPU manufacturers are not resting on their laurels. They have started to integrate their own GPU into the CPU. However, the level of integration is a far cry from the CUDA world and integrated GPU will not displace CUDA for design and engineering computing in the foreseeable future. This means our current practice as described above will remain the prevailing format for accelerating CAD, CAE and CEW.

END




Tuesday, November 22, 2011

10 Gigabit Ethernet is Ready For Your Cluster


Say "cluster" and try to keep your mind from images of massive, government-funded scientific applications or herds of caffeine-fueled grad students. Pretty tough. But in fact, the vast majority of high performance computing (HPC) clusters are nowhere near large enough to qualify as massive, are used in commercial environments, and run on Gigabit Ethernet interconnects. Even within the TOP500® Supercomputer Sites the number of clusters running Gigabit Ethernet is more than double the number of clusters running InfiniBand. Certainly, higher speed and lower latency would be nice for any installation. But the performance requirements for most applications just don't merit the high cost and labor-intensive maintenance of InfiniBand.

What most Gigabit Ethernet HPC sites could really use is an upgrade to 10 Gigabit Ethernet (10GE)-if it could be done cost-effectively and reliably. Until now, that idea would generate hesitation and skepticism among knowledgeable decision-makers. But with Gigabit Ethernet already entrenched in the HPC market and providing a slew of advantages, only a few obstacles have prevented the widespread growth of 10GE. Those obstacles are quickly evaporating. With recent technology advances, pricing improvements, and proven vendors entering the market, the choice of 10GE for HPC clusters has become quite attractive.

Understanding 10GE
Understanding the environment for 10GE merits a little history. Although Ethernet has been around for three decades, the technology remains viable because it has evolved over time to meet changing industry requirements. Widespread Ethernet adoption began when the IEEE established the 10 Mbps Ethernet standard in 1983. That standard evolved to Fast Ethernet (100 Mbps), Gigabit Ethernet (1000 Mbps), and 10 Gigabit Ethernet, with 40 and 100 Gigabit standards coming soon. In fact, discussions have started about Terabit Ethernet-a million Mbps-a speed that was hard to imagine just a few years ago.

Despite this evolution, the basic Ethernet frame format and principles of operation have remained virtually unchanged. As a result, networks of mixed speeds (10/100/1000 Mbps) operate uniformly without the need for expensive or complex gateways. When Ethernet was first deployed it could easily be confused with true plumbing-it was coaxial tubing which required special tools even to bend it. As Ethernet evolved it absorbed advancements in cabling and optics, changed from shared to switched media, introduced the concept of virtualization via VLANs, and incorporated Jumbo Frames and many other improvements. Today Ethernet continues to evolve with sweeping changes such as support for block-level storage (Fibre Channel over Ethernet).

Ratified in 2002 as IEEE 802.3ae, today's 10GE supports 10 Gigabits per second transmission over distances up to 80 km. In almost every respect, 10GE is fully compatible with previous versions of Ethernet. It uses the same frame format, Media Access Control (MAC) protocol, and frame size, and network managers can use familiar management tools and operational procedures.

Ethernet Advantages for HPC
The fact that more than half of the TOP500 Supercomputer Sites and almost all smaller clusters run Ethernet is no surprise when you look at the benefits this technology offers:
o High Comfort Level: As a widely-used standard, Ethernet is a known environment for IT executives, network administrators, server vendors, and managed service providers around the world. They have the tools to manage it and the knowledge to maintain it. Broad vendor support is also a plus-almost all vendors support Ethernet.

o Best Practices: High availability, failover, management, security, backup networks, and other best practices are well-established in Ethernet and their implementation is widely understood. This is another example of the wide acceptance and vendor support for Ethernet. (Good luck finding an InfiniBand firewall, for example!)

o Single Infrastructure: Ethernet gives HPC administrators the advantage of a single infrastructure that supports the four major connectivity requirements: user access, server management, storage connectivity, and cluster interconnect. A single infrastructure is easier to manage and less expensive to purchase, power, and maintain than using a separate technology for storage or for the processor interconnect.

o Lower Power Requirements: Power is one of the biggest expenses facing data center managers today. New environmental mandates combined with rising energy costs and demand are forcing administrators to focus on Green initiatives. Ethernet is an efficient option for power and cooling, especially when used in designs that reduce power consumption.

o Lower cost: With new servers shipping 10G ports on the motherboard and 10G switch port prices continuing to fall, 10GE has a compelling price/performance advantage over niche technologies such as InfiniBand.

o Growth Path: Higher-speed Ethernet will capitalize on the large installed base of Gigabit Ethernet. New 40GE and 100GE products will become available soon, and will be supported by many silicon and equipment vendors.

For those applications that could benefit from higher speeds, 10GE offers even more benefits.
o More Efficient Power Utilization: 10GE requires less power per gigabit than Gigabit Ethernet, so you get ten times the bandwidth without ten times the power.

o Practical Performance: 10GE can obviously move data 10 times faster than Gigabit Ethernet, but due to the new generation of 10GE NICs it also can reduce latency between servers by about 8 times.

This bandwidth and latency gain translates into higher application performance than you might imagine. For molecular dynamics (VASP running on a 64-core cluster), the application ran more than six times faster than over Gigabit Ethernet and nearly identically to InfiniBand DDR. In a mechanical simulation benchmark (PAM-CRASH running on a 64-compute-core cluster), 10GE completed tasks in about 70 percent less time than Gigabit Ethernet and was equal to InfiniBand DDR. Similar results have been observed on common HPC cluster applications such as FLUENT and RADIOSS, and more tests are reporting comparable numbers.
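A quick aside on the arithmetic: "about 70 percent less time" is a bigger speedup than it may sound, as this two-line Python check of the quoted percentage shows:

# Finishing in 70% less time means the job takes only 30% of the
# original time, i.e. roughly a 3.3x speedup.
time_reduction = 0.70
print(round(1 / (1 - time_reduction), 2))   # 3.33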

These benchmarks are impressive. Vendors love talking about microseconds and gigabits per second. But the real advantage in commercial applications is the increase in user productivity, and that's measured by the clock on the wall. If computations finish in 70 percent less time, the people waiting on those results can be correspondingly more productive.
The advantages of 10GE have many cluster architects practically salivating at the prospect of upgrading, and experts have been predicting rapid growth in the 10GE cluster market for years. That hasn't happened-yet.

Obstacles Eradicated
Until recently, 10GE was stuck in the starting gate because of a few-but arguably significant-problems involving pricing, stability, and standards. Those problems have now been overcome, and 10GE has taken off. Here's what happened.

o Network Interface Cards (NICs): Some early adopters of 10GE were discouraged by problems with the NICs, starting with the price. Until recently, 10GE NICs were expensive, and many users prefer to use two of them per server. Now server vendors are starting to add an Ethernet chip to the motherboard-known as LAN-on-Motherboard (LOM)-instead of using a separate board. This advance cuts the cost dramatically and removes the NIC price obstacle from 10GE. Standalone NIC prices have also fallen sharply and will continue to drop as LOM technology lets NIC vendors reach the high volumes they need to keep costs down.

Another NIC-related obstacle was the questionable reliability of some of the offerings. A few of these created a bad initial impression of 10GE, with immature software drivers that were prone to underperforming or even crashing. The industry has now grown past those problems, and strong players such as Chelsio, Intel and Broadcom are providing stable, reliable products.

o Switch Prices: Like NICs, initial 10GE switch prices inhibited early adoption of the technology. The original 10GE switches cost thousands of dollars per port, which was more than the price of a server. List prices per port have since dropped dramatically, and street prices are even lower. And that pricing is available for embedded blade switches as well as top-of-rack products.

o Switch Scaling: A market inhibitor for large clusters was how to connect switches together to create a nonblocking cluster fabric. Most clusters are small enough that this is not an issue. For larger clusters, Clos topologies for scaling Ethernet switches provide a solution and are starting to become established in the market (see the scaling sketch after this list).

o PHY Confusion: Rapid evolution of the different fiber optic transceiver standards was a show-stopper for customers. Standards defining the plug-in transceiver quickly changed from XENPAK to X2 to XFP to SFP+, with each bringing smaller size and lower cost. But because each type of transceiver has a different size and shape, a switch or NIC is only compatible with one option. Using multiple types of optics would increase data center complexity and add costs such as stockpiling additional spares. With visions of Blu-ray versus HD-DVD, VHS versus Betamax, and MS-DOS versus CP/M, users were unwilling to bet on a survivor and shunned the technology as they waited to see which way the market would move.

Eventually, the evolution culminated in SFP+. This technology is specified by the ANSI T11 Group for 8.5- and 10-Gbps Fibre Channel, as well as 10GE. The SFP+ module is small enough to fit 48 in a single rack-unit switch, just like the RJ-45 connectors used in previous Ethernet generations. It also houses fewer electronics, thereby reducing the power and cost per port. SFP+ has been a boon to the 10GE industry, allowing switch vendors to pack more ports into smaller form factors, and lowering system costs through better integration of IC functions at the host card level. As a result, fewer sparks are flying in the format wars, and the industry is seeing a very rapid convergence onto SFP+.

o Cabling: Many users have been holding out for 10GBase-T because it uses a common RJ45 connector and can give the market what it's waiting for: simple, inexpensive 10GE. But the physics are different at 10GE. With current technology, the chips are expensive, power hungry, and require new cabling (Cat6A or Cat 7). 10GBase-T components also add 2.6 microseconds latency across each cable-exactly what you don't want in a cluster interconnect. And as we wait for 10GBase-T, less expensive and less power-hungry technologies are being developed. 10GBASE-CX4 offers reliability and low latency, and is a proven solution that has become a mainstay technology for 10GE.

Making the wait easier are new SFP+ Copper (Twinax) Direct Attach cables, which are thin, passive cables with SFP+ ends. With support for distances up to 10 meters, they are ideal for wiring inside a rack or between servers and switches in close proximity. At a modest initial cost and with an outlook for much lower pricing, Twinax provides a simpler and less expensive alternative to optical cables. With advances such as these, clarity is overcoming confusion in the market. Between SFP+ Direct Attach cables for short distances, familiar optical transceivers for longer runs, and 10GBASE-CX4 for the lowest latency, there are great choices today for wiring clusters.
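To give a feel for the switch-scaling point mentioned earlier in this list, a nonblocking two-tier folded Clos (leaf-spine) fabric built from identical k-port switches supports roughly k*k/2 server ports. Here is a small Python sketch of that arithmetic, assuming an idealised topology rather than any particular vendor's product:

# Nonblocking two-tier folded Clos (leaf-spine) fabric built from
# identical k-port switches: each leaf uses half its ports for servers
# and half for uplinks, and each spine needs one port per leaf.
def clos_capacity(ports_per_switch):
    leaf_count = ports_per_switch               # limited by spine port count
    spine_count = ports_per_switch // 2         # one uplink from each leaf to each spine
    server_ports = leaf_count * (ports_per_switch // 2)
    return leaf_count, spine_count, server_ports

for k in (24, 48, 64):
    leaves, spines, servers = clos_capacity(k)
    print(f"{k}-port switches: {leaves} leaves + {spines} spines "
          f"-> {servers} nonblocking server ports")
# e.g. 48-port switches: 48 leaves + 24 spines -> 1152 nonblocking server ports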

When the Cluster Gets Larger
Until this point we've talked about how the barriers to 10GE adoption have been overcome for the many HPC clusters that use Gigabit Ethernet. Now let's look at the possibility of bringing the benefits of 10GE to much larger clusters with more demanding requirements. Those implementations require an interconnect that provides sufficient application performance, and a system environment that can support the rigorous hardware challenges of multiple processors such as heat dissipation and cost-effective power use.
Examining the performance question reveals that some HPC applications that are loosely coupled or don't have an excessive demand for low latency can run perfectly well over 10GE. Many TCP/IP-based applications fall into this category, and many more can be supported by adapters that offload TCP/IP processing. In fact, some TCP/IP applications actually run faster and with lower latency over 10GE than over InfiniBand.

For more performance-hungry and latency-sensitive applications, the performance potential of 10GE is comparable to current developments in InfiniBand technology. InfiniBand vendors are starting to ship 40 Gig InfiniBand (QDR), but let's look at what that really delivers. Since all InfiniBand uses 8b/10b encoding, take 20 percent off the advertised bandwidth right away-40 Gig InfiniBand is really 32 Gig, and 20 Gig InfiniBand is really only capable of 16 Gig speeds. But the real limitation is the PCIe bus inside the server-typically capable of only 13 Gigs for most servers shipped in 2008. Newer servers may use "PCIe Gen 2" to get to 26 Gigs, but soon we will begin to see 40 Gigabit Ethernet NICs on faster internal buses, and then the volumes will increase and the prices will drop. We've seen this movie before-niche technologies are overtaken by the momentum and mass vendor adoption of Ethernet.
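The encoding arithmetic from the paragraph above, spelled out in a short Python sketch (only the figures quoted there are used):

# 8b/10b encoding sends 10 bits on the wire for every 8 bits of data,
# so the usable data rate is 80% of the advertised signalling rate.
def effective_gbps(signalling_gbps, overhead=0.20):
    return signalling_gbps * (1 - overhead)

print(effective_gbps(40))   # 32.0 - "40 Gig" QDR InfiniBand
print(effective_gbps(20))   # 16.0 - "20 Gig" DDR InfiniBand
# And the practical ceiling is often the PCIe bus in the server,
# e.g. roughly 13 Gbps for many 2008-era servers as noted above.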

In addition, just as Fast Ethernet switches have Gigabit uplinks, and Gigabit switches have 10 GE uplinks, it won't be long before 10 Gigabit switches have 40 and 100 Gigabit links to upstream switches and routers. And you won't need a complex and performance-limiting gateway to connect to resources across the LAN or the wide area network. At some point, 10, 40, and 100 Gigabit Ethernet will be the right choice for even the largest clusters.

What's Important: Application Performance
One Reuters Market Data System (RMDS) benchmark (stacresearch.com) that compared InfiniBand with a BLADE Network Systems 10GE solution showed that 10GE outperformed InfiniBand, with significantly higher updates per second and 31 percent lower latency (see Figure 1 and Figure 2). These numbers demonstrate the practical benefits of 10GE far more conclusively than the micro-benchmarks of the individual components.

Practical Considerations
Switches can come in many sizes and shapes, and new, more efficient form factors are emerging. Blade servers can be used to create an efficient and powerful solution suitable for clusters of any size, with the switching and first level of interconnection entirely within the blade server chassis. Connecting server blades internally at either 1 or 10 Gigabits greatly reduces cabling requirements and generates corresponding improvements in reliability, cost, and power. Since blade servers appeared on the scene a few years ago, they have been used to create some of the world's biggest clusters. Blade servers are also frequently used to create compact departmental clusters, often dedicated to performing a single critical application.

One solution designed specifically to support the power and cooling requirements for large clusters is the IBM® System x(TM) iDataPlex(TM). This new system design is based on industry-standard components that support open source software such as Linux®. IBM developed this system to extend its proven modular and cluster systems product portfolio for the HPC and Web 2.0 community.

The system is designed specifically for power-dense computing applications where cooling is critical. An iDataPlex rack has the same footprint as a standard rack, but has much higher cooling efficiency because of its reduced fan air depth. An optional liquid cooled wall on the back of the system eliminates the need for special air conditioning. 10GE switches from BLADE Network Technologies match the iDataPlex specialized airflow, which in turn matches data centers' hot and cold aisles and creates an integrated solution that can support very large clusters.

Blade servers and scale-out solutions like iDataPlex are just two of the emerging trends in data center switching that can make cluster architectures more efficient.

A Clear Path
The last hurdles to 10GE for HPC have been cleared:
o NIC technology is stable and prices are continuing to drop while latency and throughput continue to improve, thanks to improved silicon and LAN-on-Motherboard (LOM) technology.

o 10GE switches are now cost-effective, with per-port prices continuing to fall.

o The combination of SFP+ Direct Attach cabling, SFP+ optics, and 10GBASE-CX4 provides a practical and cost-effective wiring solution.

o New platforms are being introduced with power efficiency and cooling advances that can meet demanding HPC requirements, even for large clusters.

o New benchmarks are proving that 10GE can provide real business benefits in faster job execution, while maintaining the ease-of-use of Ethernet.

o Blade server technology can support 10GE while meeting the demanding physical requirements of large clusters.

With Gigabit Ethernet the de-facto standard for all but the largest cluster applications and the last hurdles to 10GE for HPC cleared, it's time to re-create the image of the HPC network: standards-based components, widely-available expertise, compatibility, high reliability, and cost-effective technology.




Friday, November 11, 2011

Storage Performance With iSCSI Technology: Storage Solution With iSCSI Offload


A very common misperception is that iSCSI SANs are not as secure as Fibre Channel SANs. In fact, when logically or physically separated, iSCSI networks are just as secure as Fibre Channel.

iSCSI is sending SCSI commands in IP packets. To be more specific, iSCSI is designed to be a protocol for a storage initiator (usually a server) to send SCSI commands to a storage target (usually a tape or disk) over IP. iSCSI technology provides compelling price/performance in a simplified architecture while improving manageability in virtualized environments.

The SCSI protocol is not designed to handle lost frames or blocks: it expects every block to arrive, even if it is damaged (has CRC errors). Ethernet and IP, by contrast, will simply not deliver a frame that has CRC errors, behaviour that is not compatible with what SCSI expects. This is why iSCSI runs over TCP, which retransmits lost segments and delivers data in order.

The way to improve iSCSI performance is to use HBAs (Ethernet NICs) with TCP offload, which perform all the necessary calculations on an embedded ASIC, significantly reducing processing latency. The majority of storage arrays with iSCSI already make use of TCP offload, so some of the performance problems you are seeing are quite likely caused by a poor choice of Ethernet NIC in the server (or incorrect or badly configured NIC drivers). Broadcom NetXtreme II network adapters, for example, can be used for iSCSI offload.

Specifically, iSCSI offers virtualized environments simplified deployment, comprehensive storage management and data protection functionality, and seamless VM mobility. Dell iSCSI solutions give customers the "Storage Direct" advantage - the ability to seamlessly integrate virtualization into an overall, optimized storage environment.

In Linux you can use the iscsiadm command to discover iSCSI targets and log in to them. CentOS 5.1 adds an updated iscsi-initiator-utils over the 5.0 version. It adds the "iface" context, which allows you to configure more than one physical interface for connections to iSCSI targets. With CentOS 5.0 the best alternative for redundant paths was to use the Linux bonding driver and create an active-standby bond between two interfaces. This is great for redundancy, but doesn't provide any load balancing.
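As a rough sketch of what the "iface" workflow looks like (the portal address, target IQN, and interface names below are placeholders; check the open-iscsi documentation for your distribution before running anything), wrapped here in a small Python script:

import subprocess

def run(cmd):
    # Print and execute an open-iscsi administration command.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

portal = "192.168.1.50"                      # placeholder storage array address
target = "iqn.2001-05.com.example:vol1"      # placeholder target IQN

# Create one iface record per physical NIC so each gets its own session.
for iface, nic in [("iface0", "eth0"), ("iface1", "eth1")]:
    run(["iscsiadm", "-m", "iface", "-I", iface, "--op=new"])
    run(["iscsiadm", "-m", "iface", "-I", iface, "--op=update",
         "-n", "iface.net_ifacename", "-v", nic])

# Discover targets through both interfaces, then log in.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal,
     "-I", "iface0", "-I", "iface1"])
run(["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"])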

Dell's PS Series group provides iSCSI-accessible block storage. With the latest version of the PS Series firmware and the addition of an EqualLogic FS7500, the same PS Series group can provide block storage and support for NAS (Network Attached Storage). FS7500 is integrated directly into the EqualLogic Group Manager to improve productivity, making it easy to configure and manage iSCSI, CIFS and NFS storage.




Sunday, November 6, 2011

F5 Tech Demo-BIG-IP Edge Gateway Client Traffic Shaping

In this Technology Demonstration, Peter Silva sits down with Jose Gonzalez, Sr. Product Development Manager, to show the Client Side Traffic Shaping feature on the BIG-IP Edge Gateway, part of the overall BIG-IP v10.1 release. First see how UDP traffic can overtake SMB traffic, then how to configure Traffic Shaping on the BIG-IP, and finally how SMB and UDP can co-exist over the tunnel.


