There may be multiple resource sections in a single file. For more examples, please have a look at the DRBD User’s Guide.
The skip keyword comments out chunks of text, even spanning more than one line. Everything enclosed by the braces is skipped. The global section configures some global parameters.
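For illustration, a hypothetical skip block (the resource inside is made up) could look like this:

```
skip {
    # nothing between skip's braces is parsed,
    # even across multiple lines
    resource scratch {
        device /dev/drbd9;
    }
}
```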
Currently only minor-count, dialog-refresh, disable-ip-verification and usage-count are allowed here. You may only have one global section, preferably as the first section.
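A sketch of a global section using the four parameters named above (the values are illustrative, not recommendations):

```
global {
    usage-count yes;          # take part in DRBD's online usage counter
    minor-count 5;            # number of minor devices to pre-allocate
    dialog-refresh 1;         # redraw boot-time wait dialogs every second
    disable-ip-verification;  # skip sanity checking of IP addresses
}
```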
All resources inherit the options set in this section. The common section might have a startup, a syncer, a handlers, a net and a disk section. A resource section configures a DRBD resource. Each resource section needs to have two or more on host sections and may have a startup, a syncer, a handlers, a net and a disk section. Required parameter in this section: Carries the necessary configuration parameters for a DRBD device of the enclosing resource. You may list more than one host name here; in case you want to use the same parameters on several hosts, you’d usually have to move the IP around.
Or you may list more than two such sections. For a stacked DRBD setup (3 or 4 nodes), a stacked-on-top-of section is used instead of an on section. Required parameters in this section: This section is very similar to the on section. The difference from the on section is that the matching of the host sections to machines is done by the IP address instead of the node name. This section is used to fine tune DRBD’s properties with respect to the low level storage.
Please refer to drbdsetup(8) for a detailed description of the parameters. This section is used to fine tune DRBD’s properties. Please refer to drbdsetup(8) for a detailed description of this section’s parameters. This section is used to fine tune the synchronization daemon for the device. In this section you can define handlers (executables) that are started by the DRBD system in response to certain events. You can disable the IP verification with this option.
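Tying these sections together, a complete two-node resource might look like this (host names, devices, addresses and the handler path are illustrative placeholders):

```
resource r0 {
    protocol C;
    disk    { on-io-error detach; }   # low level storage tuning
    net     { timeout 60; }           # network tuning
    syncer  { rate 40M; }             # synchronization tuning
    handlers {
        # executable started by DRBD in response to a local IO error
        local-io-error "/usr/lib/drbd/notify-io-error.sh";
    }
    on alice {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.1.1.31:7789;
        meta-disk internal;
    }
    on bob {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.1.1.32:7789;
        meta-disk internal;
    }
}
```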
drbd.conf(5) — drbd-utils — Debian jessie-backports — Debian Manpages
Please participate in DRBD’s online usage counter. The most convenient way to do so is to set this option to yes. Valid protocol specifiers are A, B, and C. The name of the block device node of the resource being described.
You must use this device with your application (file system) and you must not use the low level block device which is specified with the disk parameter. DRBD uses this block device to actually store and retrieve the data. Never access such a device while DRBD is running on top of it.
This also holds true for dumpe2fs(8) and similar commands. A resource needs one IP address per device, which is used to wait for incoming connections from the partner device (respectively to reach the partner device). AF must be one of ipv4, ipv6, ssocks or sdp (for compatibility reasons, sci is an alias for ssocks). It may be omitted for IPv4 addresses. The actual IPv6 address that follows the ipv6 keyword must be placed inside brackets. Internal means that the last part of the backing device is used to store the meta-data.
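Combining the address and meta-disk rules, an on section for an IPv6 host might look like this (host name and addresses are placeholders):

```
on alice {
    device    /dev/drbd0;
    disk      /dev/sda7;
    # an IPv6 address must be placed inside brackets after the ipv6 keyword
    address   ipv6 [fd00::31]:7789;
    # internal: meta-data lives in the last part of the backing device,
    # so no [index] may be given here
    meta-disk internal;
}
```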
You must not use [index] with internal. Regardless of whether you use the meta-disk or the flexible-meta-disk keyword, it will always be of the size needed for the remaining storage size. By fencing we understand preventive measures to avoid situations where both nodes are primary and disconnected (AKA split brain). With fencing resource-only, if a node becomes a disconnected primary, it tries to fence the peer’s disk.
This is done by calling the fence-peer handler. The handler is supposed to reach the other node over alternative communication paths and call ‘drbdadm outdate res’ there. With fencing resource-and-stonith, if a node becomes a disconnected primary, it freezes all its IO operations and calls its fence-peer handler.
The fence-peer handler is supposed to reach the peer over alternative communication paths and call ‘drbdadm outdate res’ there.
In case it cannot reach the peer, it should stonith the peer. IO is resumed as soon as the situation is resolved. In case your handler fails, you can resume IO with the resume-io command. At the time of writing, the only known drivers that have such a function are: DRBD has four implementations to express write-after-write dependencies to its backing storage device.
DRBD will use the first method that is supported by the backing storage device and that is not disabled by the user. The first method requires that the driver of the backing storage device support barriers (called ‘tagged command queuing’ in SCSI and ‘native command queuing’ in SATA speak).
The use of this method can be disabled by the no-disk-barrier option. The second method requires that the backing device support disk flushes (called ‘force unit access’ in drive vendors’ speak).
The use of this method can be disabled using the no-disk-flushes option. The third method is simply to let write requests drain before write requests of a new reordering domain are issued. This was the only implementation before 8.0.9. The fourth method is to not express write-after-write dependencies to the backing store at all, by also specifying no-disk-drain.
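As a sketch, disabling the first two methods (so that DRBD falls back to draining) would look like this in the disk section:

```
disk {
    no-disk-barrier;  # method 1 off: no barriers / tagged command queuing
    no-disk-flushes;  # method 2 off: no disk flushes ('force unit access')
    # no-disk-drain is intentionally not set; see the warning that follows
}
```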
Do not use no-disk-drain. The no-md-flushes option disables the use of disk flushes and barrier BIOs when accessing the meta data device. See the notes on no-disk-flushes. A known example is: Then you might see “bio would need to, but cannot, be split: The disk state advances to diskless, as soon as the backing block device has finished all IO requests.
The default value is 0, i.e. autotune. You can specify smaller or larger values. Larger values are appropriate for reasonable write throughput with protocol A over high latency networks. Values below 32K do not make sense. Usually this should be left at its default. Setting the size value to 0 means that the kernel should autotune this. This must be lower than connect-int and ping-int. With this option you can set the time between two retries. The default value is 10 seconds, the unit is 1 second.
The default is 10 seconds, the unit is 1 second. The time the peer has to answer a keep-alive packet. In case the peer’s reply is not received within this time period, it is considered dead. The default value is 500ms; the default unit is tenths of a second. Limits the memory usage per DRBD minor device on the receiving side, or for internal buffers during resync or online-verify.
To avoid possible distributed deadlocks on congestion, this setting is used as a throttle threshold rather than a hard limit. Once more than max-buffers pages are in use, further allocation from this pool is throttled. You want to increase max-buffers if you cannot saturate the IO backend on the receiving side. In case the secondary node fails to complete a single write request for count times the timeout, it is expelled from the cluster. To disable this feature, you should explicitly set it to 0; defaults may change between versions.
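The timing and buffer parameters discussed here all belong in the net section; a sketch using the stated defaults (illustrative where noted):

```
net {
    timeout       60;   # unit 0.1s; must be lower than connect-int and ping-int
    connect-int   10;   # seconds between connect retries
    ping-int      10;   # seconds between keep-alive packets
    ping-timeout   5;   # unit 0.1s: time the peer has to answer a keep-alive
    max-buffers 8000;   # receive-side throttle threshold in pages (illustrative)
    ko-count       4;   # expel peer after 4 timed-out writes (illustrative); 0 disables
}
```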
The highest number of data blocks between two write barriers. If you set this smaller than 10, you might decrease your performance. With this option set you may assign the primary role to both nodes. You should only use this option if you use a shared storage file system on top of DRBD. At the time of writing the only ones are: If you use this option with any other file system, you are going to crash your nodes and corrupt your data!
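The dual-primary switch is the allow-two-primaries option of the net section; a hedged sketch for a shared storage file system setup:

```
net {
    # only safe with a cluster file system (e.g. OCFS2 or GFS) on top of DRBD
    allow-two-primaries;
    max-epoch-size 2048;  # data blocks between two write barriers (illustrative)
}
```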
This setting has no effect with recent kernels that use explicit on-stack plugging (upstream Linux kernel 2.6.39).
You need to specify the HMAC algorithm to enable peer authentication at all. You are strongly encouraged to use peer authentication. The HMAC algorithm will be used for the challenge response authentication of the peer. The shared secret used in peer authentication.
May be up to 64 characters. Note that peer authentication is disabled as long as no cram-hmac-alg see above is specified. Auto sync from the node that was primary before the split-brain situation happened.
Auto sync from the node that became primary as second during the split-brain situation.
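The authentication options, together with one of the named split-brain recovery policies matching the descriptions above, might be combined like this (the algorithm choice and secret are placeholders):

```
net {
    cram-hmac-alg "sha1";            # HMAC algorithm; enables peer authentication
    shared-secret "FooFunFactory";   # shared secret, up to 64 characters
    after-sb-0pri discard-younger-primary;  # sync from the pre-split-brain primary
}
```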