DCA architecture
DCA configuration rules
There are three basic types of racks that can be shipped from Manufacturing: System (DCA1-SYSRACK), Aggregation (DCA1-AGGREG), and Expansion (DCA1-EXPAND). The following rules apply to rack population (a sketch after the list illustrates them):
1. A configuration of two racks or smaller uses one System rack and one Aggregation rack.
2. A configuration of one to six racks uses a System rack and an Aggregation rack for the first two racks and an Expansion rack for each remaining rack.
3. A configuration of one to twelve racks uses a System rack as the first rack, an Aggregation rack as the second rack, and Expansion racks for the remaining racks.
4. The largest configuration allowed is 12 racks.
5. The smallest configuration allowed is 1/4 rack.
6. Each rack can contain 4, 8, 12, or 16 servers.
7. Racks must be fully populated with 16 servers before an additional rack is added.
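The rules above amount to a simple configuration check. The following Python sketch is illustrative only and is not part of the DCA or its tooling; the per-rack server counts and rack-type ordering come from the rules, while the function name and data layout are assumptions made for the example.

    # Illustrative sketch of the DCA rack-population rules above.
    # Not EMC tooling; the function name and structure are assumptions.

    VALID_SERVERS_PER_RACK = (4, 8, 12, 16)   # rule 6
    MAX_RACKS = 12                            # rule 4

    def plan_racks(servers_per_rack):
        """Validate per-rack server counts and assign rack types.

        servers_per_rack: e.g. [16, 16, 8] means three racks holding 16,
        16, and 8 servers. Returns a list of (rack type, servers) pairs
        or raises ValueError when a rule is violated.
        """
        if not servers_per_rack:
            raise ValueError("smallest configuration is 1/4 rack (4 servers)")  # rule 5
        if len(servers_per_rack) > MAX_RACKS:
            raise ValueError("largest configuration is %d racks" % MAX_RACKS)   # rule 4

        plan = []
        for i, servers in enumerate(servers_per_rack):
            if servers not in VALID_SERVERS_PER_RACK:
                raise ValueError("each rack holds 4, 8, 12 or 16 servers")      # rule 6
            # Rule 7: every rack except the last must be fully populated.
            if i < len(servers_per_rack) - 1 and servers != 16:
                raise ValueError("rack %d must hold 16 servers before rack %d is added"
                                 % (i + 1, i + 2))
            # Rules 1-3: System first, Aggregation second, Expansion thereafter.
            rack_type = ("System", "Aggregation")[i] if i < 2 else "Expansion"
            plan.append((rack_type, servers))
        return plan

    print(plan_racks([16, 16, 8]))
    # [('System', 16), ('Aggregation', 16), ('Expansion', 8)]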
The Greenplum DCA is built from four-server increments called modules. The DCA starts as a single Greenplum Database module and can be configured with a maximum of 24 modules, up to six racks. There are three hardware rack configurations, summarized in the following table.
DCA components, quantities, and rack locations
===============================================================================
Rack type          Master Host                        Interconnect   Administration
                                                      Switch         Switch
===============================================================================
System Rack        2 (one primary and one standby)    2              1
Aggregation Rack   4                                  4              2
Expansion Rack     2                                  2              1
===============================================================================
The Greenplum DCA has two Master Hosts: the primary master and a standby master. The Master Hosts are located in the System rack only.
Master Host specifications
======================================================================================================
Hardware                               Specifications                                        Quantity
======================================================================================================
Processor                              Intel X5670 2.93 GHz (6-core)                         2
Memory                                 DDR3 1333 MHz                                         48 GB
Dual-port converged network adapter    2 x 10 Gb/s                                           1
Quad-port network adapter              4 x 1 Gb/s                                            1
RAID controller                        Dual-channel 6 Gb/s SAS                               1
Hard disks                             600 GB 10k rpm SAS (one RAID 5 volume of 4+1          6
                                       with one hot spare)
======================================================================================================
Segment Host hardware specifications
======================================================================================================
Hardware                               Specifications                                        Quantity
======================================================================================================
Processor                              Intel X5670 2.93 GHz (6-core)                         2
Memory                                 DDR3 1333 MHz                                         48 GB
Dual-port converged network adapter    2 x 10 Gb/s                                           1
Dual-port network adapter              2 x 1 Gb/s                                            1
RAID controller                        Dual-channel 6 Gb/s SAS                               1
Hard disks                             Standard GPDB system: 600 GB 15k rpm SAS              12
                                       (two RAID 5 volumes of 5+1 disks)
                                       High Capacity system, DIA: 2 TB 7.2k rpm SATA
                                       (two RAID 5 volumes of 5+1 disks)
======================================================================================================
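The RAID layouts in the Master Host and Segment Host tables determine how much raw disk space is usable. The short sketch below works through that arithmetic under the standard RAID 5 rule that an n+1 volume exposes roughly n disks of capacity; it is an illustration only, not an EMC sizing tool, and it ignores filesystem and database overhead.

    # Usable-capacity arithmetic for the RAID 5 layouts listed above.
    # A RAID 5 volume built from n+1 disks exposes roughly n disks of
    # capacity (parity consumes one disk's worth); hot spares add nothing.

    def raid5_usable_gb(disk_gb, data_disks_per_volume, volumes):
        return disk_gb * data_disks_per_volume * volumes

    # Master Host: six 600 GB disks, one 4+1 RAID 5 volume plus a hot spare.
    master_gb = raid5_usable_gb(600, data_disks_per_volume=4, volumes=1)

    # Segment Host, standard GPDB system: twelve 600 GB disks,
    # two 5+1 RAID 5 volumes.
    segment_std_gb = raid5_usable_gb(600, data_disks_per_volume=5, volumes=2)

    # Segment Host, High Capacity (DIA) system: twelve 2 TB disks,
    # two 5+1 RAID 5 volumes.
    segment_hc_gb = raid5_usable_gb(2000, data_disks_per_volume=5, volumes=2)

    print(master_gb, segment_std_gb, segment_hc_gb)
    # 2400 6000 20000 (GB of raw RAID capacity, before overhead)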
Segment Host software specifications
======================================================================================================
Software                               Version
======================================================================================================
Red Hat Enterprise Linux 5             5.5
Greenplum Database                     4.1.1.3
======================================================================================================
Network component specifications
======================================================================================================
Hardware                   Specifications                                               Quantity
======================================================================================================
Interconnect Switch        24-port Converged Enhanced 10 Gb Ethernet (CEE),             2
                           Fibre Channel over Ethernet (FCoE),
                           8 Fibre Channel ports (SAN Mirror)
Administration Switch      24-port 1 Gb Ethernet Layer 3                                1
======================================================================================================
Interconnect Bus
The Interconnect Bus provides a high-speed network for communication between the Master Servers and the Segment Servers, and among the Segment Servers themselves. In this context, the term interconnect refers to the inter-process communication between the segments as well as the network infrastructure on which this communication relies. The Interconnect Bus is a private network and must not be connected to public or customer networks. It also accommodates high-speed connectivity to ETL servers or environments and provides direct access to a Data Domain storage unit for efficient data protection.
If you plan to implement the Data Domain Backup and Recovery solution, reserve a port on each switch of the Interconnect Bus for Data Domain connectivity.
Technical specifications
The Interconnect Bus consists of two CEE/FCoE, layer-2, 32-port switches. Each switch includes the following ports (see the port-budget sketch after the list):
• 24 x 10 GbE CEE ports
• 8 x 8 Gb Fibre Channel (FC) ports
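Because each switch exposes only 24 10 GbE CEE ports, attaching ETL hosts or a Data Domain system is a port-budget exercise. The sketch below illustrates that budgeting under an assumed wiring model in which every Master and Segment server attaches one converged-adapter port to each switch; the actual cabling plan comes from the DCA installation documentation, not from this example.

    # Port-budget sketch for one Interconnect Bus switch (24 x 10 GbE ports).
    # Assumes one converged-adapter port per server on this switch; ETL and
    # Data Domain links are counted as per-switch extras.

    PORTS_PER_SWITCH = 24

    def ports_used(master_hosts, segment_hosts, etl_links=0, data_domain_links=0):
        return master_hosts + segment_hosts + etl_links + data_domain_links

    # Example: a full single rack (2 masters, 16 segments) with one port
    # reserved per switch for Data Domain backup connectivity, as noted above.
    used = ports_used(master_hosts=2, segment_hosts=16, data_domain_links=1)
    print("%d of %d ports used" % (used, PORTS_PER_SWITCH))   # 19 of 24 ports used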
To maximize throughput, interconnect activity is load-balanced over two interconnect networks. To ensure redundancy, a primary segment and its corresponding mirror
segment use different interconnect networks. With this configuration, the DCA can continue operations in the event of a single Interconnect Bus failure.
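One way to picture that redundancy rule is a toy placement function that always puts a mirror on the opposite interconnect network from its primary. This is not the Greenplum placement algorithm, only an illustration of the constraint described above.

    # Toy illustration: a primary segment and its mirror use different
    # interconnect networks, so one Interconnect Bus can fail without
    # taking both copies of a segment offline.

    def networks_for_segment(segment_id):
        primary_net = segment_id % 2      # alternate primaries across networks
        mirror_net = 1 - primary_net      # mirror on the other network
        return primary_net, mirror_net

    for seg in range(4):
        p, m = networks_for_segment(seg)
        print("segment %d: primary on interconnect %d, mirror on interconnect %d" % (seg, p, m))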
User Datagram Protocol
The Interconnect Bus uses the User Datagram Protocol (UDP) to send messages over the network. Greenplum Database performs the additional packet verification and checking not performed by UDP. In this way, reliability is equivalent to that of the Transmission Control Protocol (TCP), while performance and scalability exceed those of TCP.
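The idea of layering verification on top of UDP can be pictured with a small amount of code. The sketch below is a generic illustration of sequence-numbered, checksummed datagrams, not Greenplum's interconnect implementation; every name in it is invented for the example.

    # Generic illustration of application-level checking over UDP: each
    # datagram carries a sequence number and a CRC32 checksum so the
    # receiver can detect corruption and missing or reordered packets.
    # This is not Greenplum code, only a picture of the technique.
    import socket
    import struct
    import zlib

    def pack(seq, payload):
        """Prefix the payload with a sequence number and checksum."""
        body = struct.pack("!Q", seq) + payload
        return struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF) + body

    def unpack(datagram):
        """Verify the checksum and return (sequence number, payload)."""
        (checksum,) = struct.unpack_from("!I", datagram)
        body = datagram[4:]
        if zlib.crc32(body) & 0xFFFFFFFF != checksum:
            raise ValueError("corrupt datagram")
        (seq,) = struct.unpack_from("!Q", body)
        return seq, body[8:]

    # Round-trip over a loopback UDP socket pair.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(pack(7, b"tuple batch"), rx.getsockname())
    print(unpack(rx.recv(2048)))   # (7, b'tuple batch')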
Administration switch
Each rack in a DCA includes a 24-port, 1 Gb Ethernet layer-2 switch used by EMC Customer Support as an administration network. This administration network provides secure and reliable access to server and switch consoles and prevents administration activity from becoming a bottleneck on the customer LAN or on the interconnect networks.
The management interfaces of all of the servers and switches in a DCA are connected to the administration network. In addition, the primary Master Server and the standby Master Server are connected to the administration network over separate network interfaces. As a result, access to all of the management interfaces of the DCA is available from the command line of either Master Server.