Segment Server Specifications
Quick facts about Greenplum Database Segment Servers
1. Segments are where data is stored and the majority of query processing takes place.
2. When a user connects to the database and issues a query, processes are created on each segment to handle the work of that query.
3. User-defined tables and their indexes are distributed across the available segments in a Greenplum Database system; each segment contains a distinct portion of data.
4. The database server processes that serve segment data run under the corresponding segment instances. Users interact with segments in a Greenplum Database system through the master.
5. In the recommended Greenplum Database hardware configuration, there is one active segment per effective CPU or CPU core. For example, if your segment hosts have two dual-core processors, you would have four primary segments per host.
6. The segments communicate with each other and with the master over the interconnect, which is the networking layer of Greenplum Database.
7. The Greenplum primary and mirror segments are configured to use different interconnect switches in order to provide redundancy in the event of a single switch failure.
8. Greenplum Database provides data redundancy by deploying mirror segments. Mirror segments allow database queries to fail over to a backup segment if the primary segment becomes unavailable.
9. A mirror segment always resides on a different host than its corresponding primary segment.
10. A Greenplum Database system can remain operational if a segment host, network interface, or interconnect switch goes down, as long as all portions of data are available on the remaining active segments.
11. During database operations, only the primary segment is active.
12. Changes to a primary segment are copied over to its mirror using a file block replication process. Until a failure occurs on the primary segment, there is no live segment instance running on the mirror host -- only the replication process.
13. In the event of a segment failure, the file replication process is stopped and the mirror segment is automatically brought up as the active segment instance. All database operations then continue using the mirror. While the mirror is active, it is also logging all transactional changes made to the database. When the failed segment is ready to be brought back online, administrators initiate a recovery process to bring it back into operation.
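The distribution of table data across segments can be illustrated conceptually: each row's distribution key is hashed, and the hash is reduced modulo the number of segments to pick the segment that owns the row. The sketch below is not Greenplum code; `cksum` stands in for Greenplum's internal hash function, and the key value and segment count are made-up examples.

```shell
#!/bin/sh
# Conceptual sketch only -- not Greenplum internals.
# A row's distribution key is hashed and reduced modulo the segment
# count to pick the segment that stores the row.
NUM_SEGMENTS=4                     # e.g. one primary per core on a 2x dual-core host
key="customer_123"                 # hypothetical distribution-key value
hashval=$(printf '%s' "$key" | cksum | cut -d' ' -f1)   # cksum stands in for the real hash
echo "segment $(( hashval % NUM_SEGMENTS ))"
```

Because the mapping is deterministic, every row with the same key value always lands on the same segment, which is why each segment holds a distinct, non-overlapping portion of the data.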
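The failover and recovery steps described above correspond to real Greenplum administration utilities. The session below is a sketch of a typical recovery run on the master host; the utility names are genuine Greenplum tools, but check the exact flags against your version's reference documentation before using them.

```shell
# Hypothetical recovery session on the Greenplum master host.
gpstate -e       # report segments in an error or change-tracking state
gprecoverseg     # resynchronize failed segments from their active counterparts
gprecoverseg -r  # after recovery completes, rebalance primaries back to their preferred roles
```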
Each segment instance has its own postgresql.conf file. Some configuration parameters are local: each segment instance examines its own postgresql.conf file to
get the value of that parameter.
To change a local configuration parameter across multiple segments, update the parameter in the postgresql.conf file of each targeted segment, both primary and
mirror. Alternatively, use the gpconfig utility to set the parameter in all Greenplum postgresql.conf files at once. For example:
$ gpconfig -c gp_vmem_protect_limit -v 4096MB
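As a hedged follow-up, gpconfig also has a show mode, and a changed local parameter takes effect only once the configuration files are re-read. The flags below are from the Greenplum utilities as commonly documented; verify them against your version's reference before relying on them.

```shell
# Hypothetical session on the Greenplum master host.
gpconfig -s gp_vmem_protect_limit            # show the value currently set on master and segments
gpconfig -c gp_vmem_protect_limit -v 4096MB  # write the setting into every postgresql.conf
gpstop -u                                    # reload configuration files; note that some
                                             # parameters (memory limits among them) may
                                             # require a full restart (gpstop -r) instead
```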