Terminal Appearance in Mac OS X

The standard terminal appearance is just boring old black text on a white background. Apple included a few nice preset themes too, but to really make your terminal's appearance stand out you'll want to take the time to customize it yourself. While some of these tweaks are admittedly pure eye candy, others genuinely improve the command line experience and make using the terminal not only more attractive but easier to scan.

Improve the Terminal appearance in Mac OS X

Follow along and try them all, or just pick and choose which makes the most sense for you.

Modify Bash Prompt, Enable Colors, Improve ‘ls’

At a bare minimum, let's get a better bash prompt, improve the output of the frequently used ls command, and enable colors. This is all done by editing the .bash_profile or .bashrc located in the home directory; for the purposes of this walkthrough we'll use .bash_profile:

    • Open Terminal and type nano .bash_profile
    • Paste in the following lines:

export PS1="\[\033[36m\]\u\[\033[m\]@\[\033[32m\]\h:\[\033[33;1m\]\w\[\033[m\]\$ "
export CLICOLOR=1
export LSCOLORS=ExFxBxDxCxegedabagacad
alias ls='ls -GFh'

  • Hit Control+O to save, then Control+X to exit out of nano

The first line changes the bash prompt to be colorized, and rearranges the prompt to be “username@hostname:cwd $”

The next two lines enable command line colors, and define colors for the ‘ls’ command

Finally, we alias ls to include a few flags by default. -G colorizes output, -h makes sizes human readable, and -F throws a / after a directory, * after an executable, and a @ after a symlink, making it easier to quickly identify things in directory listings.

Pasted in properly, it should look like this:

Improve the Terminal appearance

Open a new terminal window, run ls, and see the difference. Still not satisfied with the appearance, or have you already done that? There’s more to do.
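
If you'd rather not open a new window, you can also load the new settings into the current session (assuming bash is your login shell) by sourcing the profile:

source ~/.bash_profile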

Enable Bold Fonts, ANSI Colors, & Bright Colors

This will be theme and profile dependent, meaning you will have to adjust this for each theme. Most themes have ANSI color on by default, but enable it if it’s not.

  • Pull down the Terminal menu and choose “Preferences”, then click the “Settings” tab
  • Choose your profile/theme from the left side list, then under the “Text” tab check the boxes for “Use bold fonts” and “Use bright colors for bold text”

Enable bold fonts and bright colors in Terminal

This makes things like directories and executables be bold and brighter, making them easier to spot in listings.

Consider Customizing ANSI Colors

Going further with ANSI colors, if you discover that certain text contrast or text colors are hard to read with a specific profile or against a specific background color in Terminal, you may want to manually adjust the ANSI colors used by the Terminal app. This is done through the Preferences > Profiles > Text section:

Change ANSI colors in Terminal

Generally it's best to keep each ANSI color close to its intended hue while nudging it toward something easier to read, a shade of grey to replace black for example.

Adjust Background Opacity, Blur, & Background Image

After you have colorization squared away, adjusting the terminal's background appearance is a nice touch:

  • Back in Terminal Preferences, choose the theme from the left side, then go to the “Window” tab
  • Click on “Color & Effects” to adjust the background color, opacity, and blur – opacity at 80% or so and blur at 100% is pleasant on the eyes
  • Click on “Image” to select a background picture. Dark background pictures are better for dark themes, light for light, etc

Adjust the Terminals background and appearance

Opacity and blur alone tend to be enough, but going the extra step to set a background picture can look either really nice or completely garish. You make the call.

Terminal window with background image in Mac OS X

Install a Theme

Another approach is to use a Terminal theme like IR Black, which is simple to install, adds custom colors, and makes the command line much more attractive.

You can also easily create your own by spending some time with Terminal Preferences and setting colors and fonts to what you like.

New Terminal vs Old Terminal

Put it all together, and you should have something like this:

Better looking terminal in Mac OS X

Which is a bit more interesting to look at than this, right?

Command prompt

Have a useful bash prompt or some other customization tip? Let us know in the comments.

Connect CentOS to MySQL ODBC

There are many uses for this, usually reporting or some other data manipulation. Let's get started: log in and su to root.

Install some basic tools:
#yum install unixODBC mysql-connector-odbc

View the basics of your config in a nice info file:
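
A standard way to get that overview is unixODBC's odbcinst utility, which prints the version, the config file locations and the driver/DSN counts:

odbcinst -j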

View the ODBC driver config file; this is where the different database driver types are defined:
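
On a typical CentOS setup that driver file is /etc/odbcinst.ini (the exact path can vary with how unixODBC was built):

cat /etc/odbcinst.ini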

As you can see, MySQL and PostgreSQL are already configured.

From here we need to create specific instances/connections.

Here's a look at a sample ODBC connection config:
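
As a sketch, a minimal MySQL entry in /etc/odbc.ini could look like this; the DSN name, database, host and credentials are placeholders, and Driver must match a section name from /etc/odbcinst.ini:

[mysql-test]
Description = MySQL test database
Driver      = MySQL
Server      = localhost
Port        = 3306
Database    = testdb
User        = testuser
Password    = testpass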

Save and quit.

From here, install the ODBC driver and install the System DSN:
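
A sketch using unixODBC's odbcinst: -i -d installs a driver definition and -i -s -l installs a System DSN, each read from a template file (the template file names here are hypothetical):

odbcinst -i -d -f mysql-driver-template.ini
odbcinst -i -s -l -f mysql-dsn-template.ini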

Test the connection using the database name {space} username:
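
unixODBC ships the isql utility for exactly this; the DSN, user and password below are the placeholders from the sample config above (-v prints a verbose error if the connection fails):

isql -v mysql-test testuser testpass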

As we can see, basic SQL queries work:
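
For example, at the SQL> prompt that isql gives you, ordinary statements run as usual (the table name is a placeholder):

select version();
select count(*) from some_table;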

That's it!

If you get the error "Unable to find ODBC Client Interface (libodbc.so.1)", create a symlink:
ln -s /usr/lib64/libodbc.so.2 /usr/lib64/libodbc.so.1

ref: http://blog.zwiegnet.com/linux-server/connect-centos-to-mysql-odbc/

THINKPAD X220 OS X 10.11 EL CAPITAN INSTALLATION GUIDE

Follow these steps to perform a clean installation of OS X 10.11 El Capitan on your ThinkPad X220 or X220 Tablet.

A guide to install macOS Sierra Public Beta 10.12 on the X220 is available here.

  • A brief video demonstrating OS X 10.11 El Capitan running on the X220 can be viewed here.
  • If you already have OS X 10.10 Yosemite on your X220 we recommend doing a clean installation of El Capitan, not an update from the existing installation.
  • See the guide here to install OS X 10.10 Yosemite on the ThinkPad X220.
  • See the guide here to dual-boot OS X and Windows.
  • Please use the contact form below if you find anything that isn’t accurate.

PART 1 of 4: ADJUST BIOS SETTINGS

  1. Update the BIOS with the official Lenovo BIOS version 1.42 available here: Windows 1.42 Update Utility or Bootable 1.42 CD Image
  2. Install the modified BIOS version 1.42 to remove the whitelist check and permit the installation of an OS X compatible wifi card. This modified BIOS will also enable advanced settings and improve battery life under OS X.
    MD5: 282fa6399d0e96f9752ff949ed90adca
    – Stock wifi cards in the X220 are not compatible with OS X
    – Recommended half-height Mini PCIe wifi cards that require no configuration whatsoever and will work automatically in OS X: Dell DW1515 and Dell DW1510 (optional steps to rebrand a DW1510 as Apple AirPort Extreme available here)
    – For 802.11ac + Bluetooth as well as Continuity/Handoff/AirDrop support, the AzureWave AW-CE123H card will work by following the steps in the guide here. Note that it will not be possible to boot from the standard USB installer with this card installed; only install this card after the OS X installation is complete and the necessary modifications have been made.
    – Various other Mini PCIe and USB wifi adapters compatible with OS X are listed here
  3. Press F1 at startup to adjust the BIOS settings as follows:
    – Restart > Load Setup Defaults
    – Config > Power > Power On with AC Attach > Disabled
    – Config > Serial ATA (SATA) > AHCI

    – Security > Memory protection > Execution Prevention > Enabled
    – Startup > UEFI/Legacy Boot > Both

PART 2 OF 4: CREATE USB INSTALLER

  4. Download the Install OS X El Capitan app from the App Store
  5. Insert an 8GB or larger USB disk
  6. Open Applications > Utilities > Disk Utility
    If you are using Disk Utility under OS X Yosemite or earlier:
    – Select the USB disk in the left pane and select the Partition tab
    – Select  Partition Layout and then choose 1 Partition
    – Select Options… and select GUID Partition Table
    – Under Name: type USB
    – Under Format: select Mac OS Extended (Journaled)
    – Click Apply and then Partition
    If you are using Disk Utility under OS X El Capitan:
    – Select the USB disk in the left pane (select the physical disk, not a volume on the disk)
    – Click the Erase button
    – Under Name: type USB
    – Under Format: select Mac OS Extended (Journaled)
    – Under Scheme select GUID Partition Map
    – Click Erase
  7. Open Applications > Utilities > Terminal and enter the following command:
    sudo /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/Resources/createinstallmedia --volume /Volumes/USB --applicationpath /Applications/Install\ OS\ X\ El\ Capitan.app --nointeraction
  8. Enter your password when prompted
  9. After approximately 25-35 minutes the process will finish and the USB will be renamed Install OS X El Capitan (wait for the “copy complete” message in the Terminal before continuing)
  10. Download the ThinkPad X220 OS X El Capitan Utility and Kext Pack and place a copy on your installation USB
  11. Launch Clover Configurator from the Utility and Kext Pack and click the Check Partition button to identify the disk number assigned to your USB (it will probably be /dev/disk1 or /dev/disk2)
  12. Click the Mount EFI partition button and select the disk number of your USB as identified in the previous step (the disk number will be followed by s1, for example disk1s1)
  13. Quit Clover Configurator and copy the entire EFI folder from the Utility and Kext Pack to the EFI partition of your USB, replacing the existing folder if present.
    The EFI partition on your USB should now contain a single folder named EFI that contains two folders: BOOT and CLOVER.
  14. Eject your USB

PART 3 OF 4: INSTALL OS X 10.11 EL CAPITAN

  15. Disconnect any external monitors or other devices and place the USB drive in a USB 2.0 port of your ThinkPad X220. Press F12 at startup to select the USB as your boot drive
  16. Use the arrow keys to select Boot OS X Install from Install OS X El Capitan at the Clover bootloader menu and press Enter
  17. Select your desired language, launch Disk Utility then select your target drive and click Erase
  18. Name the target drive Macintosh HD, select OS X Extended (Journaled) format, GUID Partition Map scheme and click Erase
  19. After the erase process finishes, quit Disk Utility, select Install OS X and follow the prompts to do a standard OS X installation on your target drive
  20. The install process may appear to hang at the end with “About a second remaining.” Just wait – it may take up to an hour to finish (if the screen dims you can press a key to wake it)
  21. The computer will eventually restart. When it does, press F12 to select the USB as your boot drive and then select Boot Mac OS X from Macintosh HD at the Clover bootloader menu.
    (If Macintosh HD does not show in the menu just select the Install OS X El Capitan drive again – the installer sometimes triggers a reboot to complete the installation process)
  22. Complete the guided OS X setup

PART 4 OF 4: POST INSTALL

  23. Go to System Preferences > Security & Privacy and select Allow apps downloaded from: Anywhere
  24. Launch Clover Configurator from the Utility and Kext Pack and click the Check Partition button to identify the disk number of Macintosh HD (it will probably be disk0)
  25. Click the Mount EFI partition button and select the disk number of Macintosh HD as identified in the previous step
  26. The EFI partition for Macintosh HD should now be mounted and show in the sidebar under Devices when you open a Finder window
  27. Quit Clover Configurator
  28. Copy the folders named BOOT and CLOVER from the EFI folder in the Utility and Kext Pack to the EFI folder on the EFI partition of Macintosh HD.
    The EFI partition should now contain a single folder named EFI that contains three folders named APPLE, BOOT and CLOVER.
  29. Open Utility and Kext Pack > EFI > CLOVER > kexts and launch the script entitled _kext-install.command
  30. Enter your password when prompted and wait for the script to install the kexts
  31. Eject the installation USB and restart the computer
  32. Install any system updates available through the App Store and take a moment to read through the Notes and Suggestions section below
  33. Recommended additional steps to improve battery life with optimized CPU power management:
    1. Confirm that you have an active connection to the Internet
    2. Open Applications > Utilities > Terminal then copy and paste the following command in the Terminal window and hit Enter:
      curl -o ~/ssdtPRGen.sh https://raw.githubusercontent.com/Piker-Alpha/ssdtPRGen.sh/master/ssdtPRGen.sh
    3. Next, paste this command in the Terminal window and hit Enter:
      chmod +x ~/ssdtPRGen.sh
    4. Finally, paste this command in the Terminal window and hit Enter:
      ~/ssdtPRGen.sh
    5. Answer ‘N’ to the questions about copying and opening the ssdt files
    6. A customized SSDT.aml for your specific machine will now be in the /Users/yourusername/Library/ssdtPRGen directory
      (quickly access this directory in the Finder by holding the Option key (Windows key) while selecting the Go menu and then selecting Library)
    7. Copy SSDT.aml to /Volumes/EFI/EFI/CLOVER/ACPI/patched/
      (you may need to run Clover Configurator to mount the EFI partition; a Terminal alternative is sketched just after this list)
    8. Run the Kext Utility app to repair permissions and rebuild the system cache, then restart the computer
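
If you prefer the Terminal over Clover Configurator for mounting the EFI partition and copying SSDT.aml, a sketch of the same thing on the command line follows; the EFI partition identifier disk0s1 is an assumption, so confirm it with diskutil list first:

diskutil list
sudo diskutil mount disk0s1
cp ~/Library/ssdtPRGen/SSDT.aml /Volumes/EFI/EFI/CLOVER/ACPI/patched/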

 


Notes and Suggestions

  • User Reviews of this Guide
  • General Suggestions
    • Do not encrypt your boot drive with FileVault. Doing so will prevent the system from booting correctly.
    • Keep your El Capitan installation USB on hand. After installing OS X system updates it is sometimes necessary to boot the system with the USB and re-install kexts to get everything working properly again.
    • Use Clover Configurator if you need to mount the EFI partition or make modifications to the provided config.plist
    • If a kernel panic occurs, boot from the installation USB, select the installation drive, hit the space bar and select Boot Mac OS X in safe mode to successfully boot and make necessary changes.
    • If the installation process created Macintosh HD as a logical volume rather than a physical volume, you can revert it back to normal by opening Applications > Utilities > Terminal and entering the following two commands:
      • diskutil cs list
      • diskutil coreStorage revert lvUUID
        (replace lvUUID with the last logical volume id string reported by the first Terminal command).
  • Touchpad, TrackPoint and Tablet Input
    • Touchpad and TrackPoint use RehabMan’s VoodooPS2Controller.kext found here.
    • Turn off the touchpad with the <PrtSc> key if you will only be using the TrackPoint – this will prevent an issue with unintentional double-clicks with the TrackPoint buttons.
    • Touchpad supports three-finger swipe right and left (forward and back) in Finder, Safari and other browsers
    • For X220 Tablet models, pen input should already work properly. For touch input, install ControllerMate and use the script written by user jakibaki available here. Jakibaki’s script also adds some gestures including swipe from top to get Mission Control, swipe from bottom for Launchpad and swipe from left/right to switch workspaces.
  • Special Keys
    • The <PrtSc> key toggles the touchpad on and off
    • The <ScrLk> and <Pause> keys adjust screen brightness as do the standard <Fn>+<Home> and <Fn>+<End> keys
    • The <Insert> key will eject the CD/DVD drive (attached by USB or docking station)
    • The blue ThinkVantage button will toggle between normal fan speeds and the maximum fan speed
  • Fan Speed and Noise
    • Fan speeds can be reduced by installing the alternate dsdt.aml and ACPIPoller.kext available here
    • Fan noise can also be regulated by changing the BIOS setting under Config > Power > Adaptive Thermal Management to Balanced
  • Video / External Displays
    • To enable scaled resolutions of 1536 x 864 and 1920 x 1080 on the stock LCD panel, follow the steps described here
    • Video output through VGA, DisplayPort and docking stations works normally for single external monitor configurations (internal LCD + one external monitor)
    • If an external monitor is not automatically detected, open System Preferences > Displays and press the Option key (the Windows key on the X220 keyboard). This will show a Detect Displays button which should make the external monitor show up immediately.
    • If DisplayPort or VGA connections on Core i7 systems do not function properly:
      • Launch Clover Configurator and mount the EFI partition of your installation drive
      • Click File > Open… then select EFI > EFI > CLOVER > config.plist
      • Select SMBIOS in the left panel and click the “magic wand” button on the right
      • Select the MacBook Pro image (second image from the left) and then select MacBook Pro (8,1) – Core i5/i7 (Sandy Bridge) from the pulldown menu at the top
      • Click the OK button and then File > Save to write the changes to your config.plist
      • Restart the computer
  • Miscellaneous
    • To enable docking station headphone and microphone ports, use the alternate AppleHDA_20672.kext available here. Simply place this alternate version in Utility and Kext Pack > EFI > CLOVER > kexts > Other and repeat steps 29-31 above.
    • DW1510 wireless cards can be rebranded to identify as native Apple AirPort Extreme cards by following the steps here
    • SuperDuper is an excellent free utility to create a full, bootable backup of your drive that can be restored later if necessary
    • HWSensors provides a convenient way to monitor the status of your system from the menu bar
    • If the Bluetooth radio is turned off in Windows or Linux it may no longer show up when booting into OS X. Boot back into Windows or Linux to turn the Bluetooth radio back on.
    • If FaceTime or Messages (iMessage) does not work correctly, follow the steps in the guide here (use the MacBookPro8.1 SMBIOS profile in step 4)
    • Custom “OS X220” desktop wallpaper by user Will is available here
  • Sources / Credits
    • Original source of modified BIOS 1.42 is here
    • Included dsdt.aml, config.plist and kext installation script are from the ThinkPad T420 guide found here
    • Guide to editing dsdt.aml with MaciASL can be found here
    • Custom ssdt.aml script source is here
  • Not functioning
    • SD Card reader
    • Fingerprint reader
    • Microphone mute button

ref: http://x220.mcdonnelltech.com/

Linux Monitoring Tools

Command Line Tools

Top

This is a small tool which is pre-installed on many unix systems. When you want an overview of all the processes or threads running in the system, top is a good tool. It orders processes by different criteria – CPU usage by default.

htop

Htop is essentially an enhanced version of top. It’s easier to sort by processes. It’s visually easier to understand and has built in commands for common things you would like to do. Plus it’s fully interactive.

atop

Atop monitors all processes much like top and htop; unlike top and htop, however, it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load.

apachetop

Apachetop monitors the overall performance of your apache webserver. It’s largely based on mytop. It displays current number of reads, writes and the overall number of requests processed.

ftptop

ftptop gives you basic information of all the current ftp connections to your server such as the total amount of sessions, how many are uploading and downloading and who the client is.

mytop

mytop is a neat tool for monitoring threads and performance of mysql. It gives you a live look into the database and what queries it’s processing in real time.

powertop

powertop helps you diagnose issues that have to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key.

iotop

iotop checks the I/O usage information and gives you a top-like interface to that. It displays columns on read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O.
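
A common way to run it (the -o flag is standard iotop and limits the output to processes actually doing I/O; root privileges are usually needed):

sudo iotop -o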

Desktop Monitoring

ntopng

ntopng is the next generation of ntop and the tool provides a graphical user interface via the browser for network monitoring. It can do stuff such as: geolocate hosts, get network traffic and show ip traffic distribution and analyze it.

iftop

iftop is similar to top, but instead of mainly checking for cpu usage it listens to network traffic on selected network interfaces and displays a table of current usage. It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”.

jnettop

jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis.

bandwidthd

BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building a html page with graphs in png. There is a database driven system that supports searching, filtering, multiple sensors and custom reports.

EtherApe

EtherApe displays network traffic graphically; the more talkative a host, the bigger its node. It either captures live traffic or can read it from a tcpdump. The display can also be refined using a network filter with pcap syntax.

ethtool

ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices.

NetHogs

NetHogs breaks down network traffic per protocol or per subnet. It then groups by process. So if there’s a surge in network traffic you can fire up NetHogs and see which process is causing it.
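
For example, to watch a single interface (eth0 is an assumption, substitute your own):

sudo nethogs eth0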

iptraf

iptraf gathers a variety of metrics such as TCP connection packet and byte count, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts.

ngrep

ngrep is grep but for the network layer. It’s pcap aware and will allow you to specify extended regular or hexadecimal expressions to match against the data payloads of packets.

MRTG

MRTG was originally developed to monitor router traffic, but now it’s able to monitor other network related things as well. It typically collects data every five minutes and then generates an html page. It also has the capability of sending warning emails.

bmon

Bmon monitors and helps you debug networks. It captures network related statistics and presents them in a human friendly way. You can also interact with bmon through curses or through scripting.

traceroute

Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network.

IPTState

IPTState allows you to watch where traffic that crosses your iptables is going and then sort that by different criteria as you please. The tool also allows you to delete states from the table.

darkstat

Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and give you a nice graphical user interface for the graphs.

vnStat

vnStat is a network traffic monitor that uses statistics provided by the kernel, which ensures light use of system resources. The gathered statistics persist through system reboots. It has color options for the artistic sysadmins.

netstat

Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interfaces. It’s used to find problems in the network.

ss

Instead of using netstat, it is, however, preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want summary statistics you can use the command ss -s.

nmap

Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use it to find SQL injection vulnerabilities, do network discovery, and other tasks related to penetration testing.

MTR


MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it limits the number of hops individual packets have to travel, while also listening to the responses as they expire. It then repeats this every second.

tcpdump


tcpdump will output a description of the contents of the packets it captures that match the expression you provided on the command line. You can also save this data for further analysis.
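
A small sketch of both ideas at once: capture only HTTP traffic on eth0 (interface name assumed) and write it to a file for later analysis:

sudo tcpdump -i eth0 -w web-traffic.pcap 'tcp port 80'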

Justniffer


Justniffer is a tcp packet sniffer. You can choose whether you would like to collect low-level data or high-level data with this sniffer. It also allows you to generate logs in a customizable way. You could for instance mimic the access log that apache has.

Infrastructure Monitoring

Server Density


Our server monitoring tool! It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites to check whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or API. The service already supports Nagios plugins.

OpenNMS


OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring; and data collection. It’s designed to be customizable to work in a variety of network environments.

SysUsage


SysUsage monitors your system continuously via Sar and other system commands. It also allows notifications to alert you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are also being stored. It has a web interface where you can view all the stats.

brainypdm


brainypdm is a data management and monitoring tool that has the capability to gather data from nagios or another generic source to make graphs. It’s cross-platform, has custom graphs and is web based.

PCP


PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework so you can make it collect specific metrics that are important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems.

KDE system guard


This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet, and if a process needs to be killed or started, it can be done within KDE system guard.

Munin


Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has a web interface to display them. Its emphasis is on plug and play capabilities with a number of plugins available.

Nagios


Nagios is a system and network monitoring tool that helps you monitor your many servers. It has support for alerting when things go wrong. It also has many plugins written for the platform.

Zenoss


Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins.

Cacti


Cacti is a network graphing solution that uses the RRDtool data storage. It allows a user to poll services at predetermined intervals and graph the result. Cacti can be extended to monitor a source of your choice through shell scripts.

Zabbix

Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The core is written in C and the frontend in PHP. If you don’t like installing an agent, Zabbix might be an option for you.

nmon


nmon either outputs the data on screen or saves it in a comma separated file. You can display CPU, memory, network, filesystems and top processes. The data can also be added to an RRD database for further analysis.

conky


Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person you could extend it with your own scripts or programs using Lua.

Glances


Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface.

saidar


Saidar is a very small tool that gives you basic information about your system resources. It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible.

RRDtool


RRDtool is a tool developed to handle round-robin databases or RRD. RRD aims to handle time-series data like CPU load, temperatures etc. This tool provides a way to extract RRD data in a graphical format.

monit


Monit has the capability of sending you alerts as well as restarting services if they run into trouble. It’s possible to perform any type of check you could write a script for with monit and it has a web user interface to ease your eyes.

Linux process explorer

Linux process explorer is akin to the Activity Monitor on OS X or the Windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory and CPU it uses.

df


df is an abbreviation for disk free and is a pre-installed program on all unix systems, used to display the amount of available disk space for the filesystems the user has access to.

discus


Discus is similar to df; however, it aims to improve on df by making it prettier, using fancy features such as colors, graphs and smart formatting of numbers.

xosview


xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the system, including IRQs.

Dstat


Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind.

Net-SNMP

SNMP is the Simple Network Management Protocol, and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol.

incron

Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory ‘b’ once new files appeared in directory ‘a’ that’s exactly what incron does.
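
A minimal incrontab sketch for that exact scenario, with /srv/a and /srv/b as placeholder directories; in incron, $@ expands to the watched path and $# to the name of the file that triggered the event:

/srv/a IN_CREATE cp $@/$# /srv/b/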

monitorix

Monitorix is a lightweight system monitoring tool. It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism for all metrics.

vmstat


vmstat or virtual memory statistics is a small built-in tool that monitors and displays a summary about the memory in the machine.

uptime

This small command quickly gives you information about how long the machine has been running, how many users are currently logged on, and the system load average for the past 1, 5 and 15 minutes.

mpstat


mpstat is a built-in tool that monitors cpu usage. The most common command is using mpstat -P ALL which gives you the usage of all the cores. You can also get an interval update of the CPU usage.
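
For example, the standard sysstat interval/count arguments print per-core usage every two seconds, five times:

mpstat -P ALL 2 5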

pmap


pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks.

ps


The ps command will give you an overview of all the current processes. You can easily select all processes using the command ps -A

sar


sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things.

collectl


Similar to sar, collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats, but it collects a lot more. The difference from sar is that collectl is able to deal with sampling intervals below one second, it can be fed into a plotting tool directly, and collectl monitors processes more extensively.

iostat


iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine.

free


This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment.

/Proc file system


The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the full list of the proc file statistics

GKrellM

GKrellM is a gui application that monitors the status of your hardware such as CPU, main memory, hard disks, network interfaces and many other things. It can also monitor and launch a mail reader of your choice.

Gnome system monitor


Gnome system monitor is a basic system monitoring tool that has features for looking at process dependencies in a tree view, killing or renicing processes, and graphing all server metrics.

Log Monitoring Tools

GoAccess


GoAccess is a real-time web log analyzer which analyzes the access log from either apache, nginx or amazon cloudfront. It’s also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things.

Logwatch

Logwatch is a log analysis system. It parses through your system’s logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine.

Swatch


Much like Logwatch, Swatch also monitors your logs, but instead of giving reports it watches for regular expressions and notifies you via mail or the console when there is a match. It could be used for intruder detection for example.

MultiTail


MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading with the help of regular expressions.

Network Monitoring

acct or psacct

acct or psacct (depending on if you use apt-get or yum) allows you to monitor all the commands a user executes inside the system, including CPU and memory time. Once installed you get that summary with the command ‘sa’.
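
Once the accounting service is enabled, a couple of typical commands from the same package (the username is a placeholder):

sa -u               # list commands from the accounting file along with the user who ran them
lastcomm someuser   # commands recently executed by that user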

whowatch

Similar to acct this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes and so you can see exactly what’s happening.

strace


strace is used to diagnose, debug and monitor interactions between processes and the kernel. The most common thing to do is to make strace print a list of system calls made by the program, which is useful if the program does not behave as expected.
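
For example, both of these are standard strace invocations: the first writes the full trace of a command to a file, the second prints a summary table of syscall counts and time:

strace -o trace.log ls /tmp
strace -c ls /tmp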

DTrace


DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it’s not for the faint of heart, as there is a 1,200-page book written on the topic.

webmin


Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it.

stat


Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when the file was modified, accessed or changed.

ifconfig


ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes, network monitoring tools use ifconfig to set interfaces into promiscuous mode to capture all packets. You can do it yourself with ifconfig eth0 promisc and return to normal mode with ifconfig eth0 -promisc.

ulimit


ulimit is a built-in tool that monitors system resources and keeps a limit so any of the monitored resources don’t go overboard. For instance, running a fork bomb where a properly configured ulimit is in place would leave the system totally fine.

cpulimit

CPUlimit is a small tool that monitors and then limits the CPU usage of a process. It’s particularly useful to make batch jobs not eat up too many CPU cycles.

lshw


lshw is a small built-in tool that extracts detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration.

w

W is a built-in command that displays information about the users currently using the machine and their processes.

lsof


lsof is a built-in tool that gives you a list of all open files and network connections. From there you can narrow it down to files opened by processes, based on the process name, by a specific user, or perhaps kill all processes that belong to a specific user.
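
A few common invocations illustrating that narrowing down (the username and process name are placeholders):

lsof -u someuser                 # open files belonging to a specific user
lsof -c sshd                     # open files of processes whose name starts with sshd
kill -9 $(lsof -t -u someuser)   # -t prints bare PIDs, handy for feeding into kill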

Thanks for your suggestions. It’s an oversight on our part that we’ll have to go back through and renumber all the headings. In light of that, here’s a short section at the end for some of the Linux monitoring tools recommended by you:

collectd

Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any niche monitoring. This way collectd stays as lightweight and customizable as possible.

Observium

Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network.

Nload

It’s a command line tool that monitors network throughput. It’s neat because it visualizes the incoming and outgoing traffic using two graphs, plus some additional useful data like the total amount of transferred data. You can install it with

yum install nload

or

sudo apt-get install nload

SmokePing

SmokePing keeps track of the network latencies of your network and it visualises them too. There is a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you, there is ongoing development to make that happen.

MobaXterm

If you’re working in a Windows environment day in and day out, you may feel limited by the terminal Windows provides. MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux, which will help you tremendously in your monitoring needs.

Shinken monitoring

Shinken is a monitoring framework which is a total rewrite of Nagios in Python. It aims to enhance flexibility and the management of large environments, while still keeping all your Nagios configuration and plugins.

Synapse Based SSH Client

Many times, I needed a way to let my Delphi/FPC applications connect to an SSH server, execute some commands, and get the results. Now I’m publishing a simple class based on Synapse’s TTelnetSend class to do exactly what I needed.

Required ingredients

First of all, you’ll need to grab a copy of the latest version of Synapse. To connect to an SSH server, the connection must be established over an encrypted protocol, and Synapse allows that kind of connection through plugins that filter the data through OpenSSL, CryptLib and StreamSecII. Here, I’ll use CryptLib, so you’ll have to get a compiled version of cl32.dll for Windows; if you need the library compiled for Linux, search for it in your repository (or use Google).

Now, configure the search paths of your compiler to find both the Synapse source and cryptlib.pas, the wrapper for cl32.dll.

Introducing TSSHClient class

This is a simple class to let you connect to an SSH server in an object-oriented way; its internal parts were found in the Synapse newsgroups, tested, and arranged into a class.

unit sshclient;
interface
uses
  tlntsend, ssl_openssl, ssl_openssl_lib, ssl_cryptlib;
type
  TSSHClient = class
  private
    FTelnetSend: TTelnetSend;
  public
    constructor Create(AHost, APort, AUser, APass: string);
    destructor Destroy; override;
    procedure SendCommand(ACommand: string);
    procedure LogOut;
    function ReceiveData: string;
    function LogIn: Boolean;
  end;
implementation
{ TSSHClient }
constructor TSSHClient.Create(AHost, APort, AUser, APass: string);
begin
  FTelnetSend := TTelnetSend.Create;
  FTelnetSend.TargetHost := AHost;
  FTelnetSend.TargetPort := APort;
  FTelnetSend.UserName := AUser;
  FTelnetSend.Password := APass;
end;
destructor TSSHClient.Destroy;
begin
  FTelnetSend.Free;
  inherited;
end;
function TSSHClient.LogIn: Boolean;
begin
  Result := FTelnetSend.SSHLogin;
end;
procedure TSSHClient.LogOut;
begin
  FTelnetSend.Logout;
end;
function TSSHClient.ReceiveData: string;
var
  lPos: Integer;
begin
  Result := '';
  lPos := 1;
  while FTelnetSend.Sock.CanRead(1000) or (FTelnetSend.Sock.WaitingData > 0) do
  begin
    FTelnetSend.Sock.RecvPacket(1000);
    Result := Result + Copy(FTelnetSend.SessionLog, lPos, Length(FTelnetSend.SessionLog));
    lPos := Length(FTelnetSend.SessionLog)+1;
  end;
end;
procedure TSSHClient.SendCommand(ACommand: string);
begin
  FTelnetSend.Send(ACommand + #13);
end;
end.

A sample application

This is an example of how to execute a “df -h” command on an external SSH server, inspired by a question on StackOverflow.

The example just connects to a server, executes the command and captures its output, nothing more.

program TestSSHClient;
{$APPTYPE CONSOLE}
uses
  sshclient;
var
  lSSh: TSSHClient;
begin
  lSSh := TSSHClient.Create('[TARGET_HOST_OR_IP_ADDRESS]','[PORT]', '[USER]', '[PASSWORD]');
  if lSSh.LogIn then
  begin
    Writeln('Connected!.');
    (* Get welcome message *)
    Writeln(lSSh.ReceiveData);
    (* Send command *)
    lSSh.SendCommand('df -h');
    (* Receive results *)
    Writeln(lSSh.ReceiveData);
    lSSh.LogOut;
    Writeln('Logged out.');
  end
  else
    Writeln('Can''t connect.');
  lSSh.Free;
end.                           

Replace the words between ‘[’ and ‘]’ with the real values and test it.

ref : http://leonardorame.blogspot.com/2010/01/synapse-based-ssh-client.html

TDocVariant custom variant type

With revision 1.18 of the framework, we just introduced two new custom types of variants:

  • TDocVariant kind of variant;
  • TBSONVariant kind of variant.

The second custom type (which handles MongoDB-specific extensions – like ObjectID or other specific types like dates or binary) will be presented later, when dealing with MongoDB support in mORMot, together with the BSON kind of content. BSON / MongoDB support is implemented in the SynMongoDB.pas unit.

We will now focus on TDocVariant itself, which is a generic container of JSON-like objects or arrays.
This custom variant type is implemented in SynCommons.pas unit, so is ready to be used everywhere in your code, even without any link to the mORMot ORM kernel, or MongoDB.

TDocVariant documents

TDocVariant implements a custom variant type which can be used to store any JSON/BSON document-based content, i.e. either:

  • Name/value pairs, for object-oriented documents;
  • An array of values (including nested documents), for array-oriented documents;
  • Any combination of the two, by nesting TDocVariant instances.

Here are the main features of this custom variant type:

  • DOM approach of any object or array documents;
  • Perfect storage for dynamic value-objects content, with a schema-less approach (as you may be used to in scripting languages like Python or JavaScript);
  • Allow nested documents, with no depth limitation but the available memory;
  • Assignment can be either per-value (default, safest but slower when containing a lot of nested data), or per-reference (immediate reference-counted assignment);
  • Very fast JSON serialization / un-serialization with support of MongoDB-like extended syntax;
  • Access to properties in code, via late-binding (including almost no speed penalty due to our VCL hack as already detailed);
  • Direct access to the internal variant names and values arrays from code, by trans-typing into a TDocVariantData record;
  • Instance life-time is managed by the compiler (like any other variant type), without the need to use interfaces or explicit try..finally blocks;
  • Optimized to use as little memory and CPU resource as possible (in contrast to most other libraries, it does not allocate one class instance per node, but rely on pre-allocated arrays);
  • Opened to extension of any content storage – for instance, it will perfectly integrate with BSON serialization and custom MongoDB types (ObjectID, RegEx…), to be used in conjunction with MongoDB servers;
  • Perfectly integrated with our Dynamic array wrapper and its JSON serialization as with the record serialization;
  • Designed to work with our mORMot ORM: any TSQLRecord instance containing such variant custom types as published properties will be recognized by the ORM core, and work as expected with any database back-end (storing the content as JSON in a TEXT column);
  • Designed to work with our mORMot SOA: any interface-based service is able to consume or publish such kind of content, as variant kind of parameters;
  • Fully integrated with the Delphi IDE: any variant instance will be displayed as JSON in the IDE debugger, making it very convenient to work with.

To create instances of such variant, you can use some easy-to-remember functions:

  • _Obj() _ObjFast() global functions to create a variant object document;
  • _Arr() _ArrFast() global functions to create a variant array document;
  • _Json() _JsonFast() _JsonFmt() _JsonFastFmt() global functions to create any variant object or array document from JSON, supplied either with standard or MongoDB-extended syntax.

Variant object documents

With _Obj(), an object variant instance will be initialized with data supplied two by two, as Name,Value pairs, e.g.

var V1,V2: variant; // stored as any variant
 ...
  V1 := _Obj(['name','John','year',1972]);
  V2 := _Obj(['name','John','doc',_Obj(['one',1,'two',2.5])]); // with nested objects

Then you can convert those objects into JSON, by two means:

  • Using the VariantSaveJson() function, which return directly one UTF-8 content;
  • Or by trans-typing the variant instance into a string (this will be slower, but is possible).
 writeln(VariantSaveJson(V1)); // explicit conversion into RawUTF8
 writeln(V1);                  // implicit conversion from variant into string
 // both commands will write '{"name":"John","year":1972}'
 writeln(VariantSaveJson(V2)); // explicit conversion into RawUTF8
 writeln(V2);                  // implicit conversion from variant into string
 // both commands will write '{"name":"John","doc":{"one":1,"two":2.5}}'

As a consequence, the Delphi IDE debugger is able to display such variant values as their JSON representation.
That is, V1 will be displayed as '"name":"John","year":1972' in the IDE debugger Watch List window, or in the Evaluate/Modify (F7) expression tool.
This is pretty convenient, and much more user friendly than any class-based solution (which requires the installation of a specific design-time package in the IDE).

You can access the object properties via late-binding, with any depth of nesting objects, in your code:

 writeln('name=',V1.name,' year=',V1.year);
 // will write 'name=John year=1972'
 writeln('name=',V2.name,' doc.one=',V2.doc.one,' doc.two=',V2.doc.two);
 // will write 'name=John doc.one=1 doc.two=2.5'
 V1.name := 'Mark';       // overwrite a property value
 writeln(V1.name);        // will write 'Mark'
 V1.age := 12;            // add a property to the object
 writeln(V1.age);         // will write '12'

Note that the property names will be evaluated at runtime only, not at compile time.
For instance, if you write V1.nome instead of V1.name, there will be no error at compilation, but an EDocVariant exception will be raised at execution (unless you set the dvoReturnNullForUnknownProperty option to _Obj/_Arr/_Json/_JsonFmt which will return a null variant for such undefined properties).

In addition to the property names, some pseudo-methods are available for such object variant instances:

  writeln(V1._Count); // will write 3 i.e. the number of name/value pairs in the object document
  writeln(V1._Kind);  // will write 1 i.e. ord(sdkObject)
  for i := 0 to V2._Count-1 do
    writeln(V2.Name(i),'=',V2.Value(i));
  // will write in the console:
  //  name=John
  //  doc={"one":1,"two":2.5}
  //  age=12
  if V1.Exists('year') then
    writeln(V1.year);

You may also trans-type your variant instance into a TDocVariantData record, and directly access its internals.
For instance:

 TDocVariantData(V1).AddValue('comment','Nice guy');
 with TDocVariantData(V1) do            // direct transtyping
   if Kind=sdkObject then               // direct access to the TDocVariantData.Kind field
     for i := 0 to Count-1 do           // direct access to the Count: integer field
       writeln(Names[i],'=',Values[i]); // direct access to the internal storage arrays

By definition, trans-typing via a TDocVariantData record is slightly faster than using late-binding.
But you must ensure that the variant instance is really a TDocVariant kind of data before transtyping e.g. by calling DocVariantType.IsOfType(aVariant).

Variant array documents

With _Arr(), an array variant instance will be initialized with data supplied as a list of Value1,Value2,…, e.g.

var V1,V2: variant; // stored as any variant
 ...
  V1 := _Arr(['John','Mark','Luke']);
  V2 := _Obj(['name','John','array',_Arr(['one','two',2.5])]); // as nested array

Then you can convert those objects into JSON, by two means:

  • Using the VariantSaveJson() function, which return directly one UTF-8 content;
  • Or by trans-typing the variant instance into a string (this will be slower, but is possible).
 writeln(VariantSaveJson(V1));
 writeln(V1);  // implicit conversion from variant into string
 // both commands will write '["John","Mark","Luke"]'
 writeln(VariantSaveJson(V2));
 writeln(V2);  // implicit conversion from variant into string
 // both commands will write '{"name":"John","array":["one","two",2.5]}'

As with any object document, the Delphi IDE debugger is able to display such array variant values as their JSON representation.

Late-binding is also available, with a special set of pseudo-methods:

  writeln(V1._Count); // will write 3 i.e. the number of items in the array document
  writeln(V1._Kind);  // will write 2 i.e. ord(sdkArray)
  for i := 0 to V1._Count-1 do
    writeln(V1.Value(i),':',V2._(i));
  // will write in the console:
  //  John John
  //  Mark Mark
  //  Luke Luke
  if V1.Exists('John') then
    writeln('John found in array');

Of course, trans-typing into a TDocVariantData record is possible, and will be slightly faster than using late-binding.

Create variant object or array documents from JSON

With _Json() or _JsonFmt(), either a document or array variant instance will be initialized with data supplied as JSON, e.g.

var V1,V2,V3,V4: variant; // stored as any variant
 ...
  V1 := _Json('{"name":"john","year":1982}'); // strict JSON syntax
  V2 := _Json('{name:"john",year:1982}');     // with MongoDB extended syntax for names
  V3 := _Json('{"name":?,"year":?}',[],['john',1982]);
  V4 := _JsonFmt('{%:?,%:?}',['name','year'],['john',1982]);
  writeln(VariantSaveJSON(V1));
  writeln(VariantSaveJSON(V2));
  writeln(VariantSaveJSON(V3));
  // all commands will write '{"name":"john","year":1982}'

Of course, you can nest objects or arrays as parameters to the _JsonFmt() function.

The supplied JSON can be either in strict JSON syntax, or with the MongoDB extended syntax, i.e. with unquoted property names.
It can be pretty convenient, and also less error-prone, to forget about quotes around the property names when typing the JSON in your Delphi code.

Note that TDocVariant implements an open interface for adding any custom extensions to JSON: for instance, if the SynMongoDB.pas unit is defined in your application, you will be able to create any MongoDB specific types in your JSON, like ObjectID(), new Date() or even /regex/option.

As with any object or array document, the Delphi IDE debugger is able to display such variant values as their JSON representation.

Per-value or per-reference

By default, the variant instance created by _Obj() _Arr() _Json() _JsonFmt() will use a copy-by-value pattern.
It means that when an instance is assigned to another variable, a new variant document will be created, and all internal values will be copied. Just like a record type.

This will imply that if you modify any item of the copied variable, it won’t change the original variable:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972]);
 V2 := V1;                // create a new variant, and copy all values
 V2.name := 'James';      // modifies V2.name, but not V1.name
 writeln(V1.name,' and ',V2.name);
 // will write 'John and James'

As a result, your code will be perfectly safe to work with, since V1 and V2 will be uncoupled.

But one drawback is that passing such a value may be pretty slow, for instance, when you nest objects:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972]);
 V2 := _Arr(['John','Mark','Luke']);
 V1.names := V2; // here the whole V2 array will be re-allocated into V1.names

Such a behavior could be pretty time and resource consuming, in case of a huge document.

All _Obj() _Arr() _Json() _JsonFmt() functions have an optional TDocVariantOptions parameter, which allows you to change the behavior of the created TDocVariant instance, especially setting dvoValueCopiedByReference.

This particular option will set the copy-by-reference pattern:

var V1,V2: variant;
 ...
 V1 := _Obj(['name','John','year',1972],[dvoValueCopiedByReference]);
 V2 := V1;             // creates a reference to the V1 instance
 V2.name := 'James';   // modifies V2.name, but also V1.name
 writeln(V1.name,' and ',V2.name);
 // will write 'James and James'

You may think this behavior is somewhat weird for a variant type. But if you forget about per-value objects and consider those TDocVariant types as Delphi class instances (which are per-reference types), without the need to have a fixed schema nor to handle the memory manually, it will probably start to make sense.

Note that a set of global functions have been defined, which allow direct creation of documents with per-reference instance lifetime, named _ObjFast() _ArrFast() _JsonFast() _JsonFastFmt().
Those are just wrappers around the corresponding _Obj() _Arr() _Json() _JsonFmt() functions, with the following JSON_OPTIONS[true] constant passed as the options parameter:

const
  /// some convenient TDocVariant options
  // - JSON_OPTIONS[false] is _Json() and _JsonFmt() functions default
  // - JSON_OPTIONS[true] are used by _JsonFast() and _JsonFastFmt() functions
  JSON_OPTIONS: array[Boolean] of TDocVariantOptions = (
    [dvoReturnNullForUnknownProperty],
    [dvoReturnNullForUnknownProperty,dvoValueCopiedByReference]);

When working with complex documents, e.g. with BSON / MongoDB documents, almost all content will be created in “fast” per-reference mode.

Advanced TDocVariant process

Object or array document creation options

As stated above, a TDocVariantOptions parameter enables you to define the behavior of a TDocVariant custom type for a given instance.
Please refer to the documentation of this set of options to find out the available settings. Some are related to the memory model, others to case-sensitivity of the property names, others to the behavior expected in case of a non-existing property, and so on…

Note that this setting is local to the given variant instance.

In fact, TDocVariant does not force you to stick to one memory model nor a set of global options, but you can use the best pattern depending on your exact process.
You can even mix the options – i.e. including some objects as properties in an object created with other options – but in this case, the initial options of the nested object will remain, so you had better use this feature with caution.

You can use the _Unique() global function to force a variant instance to have an unique set of options, and all nested documents to become by-value, or _UniqueFast() for all nested documents to become by-reference.

// assuming V1='{"name":"James","year":1972}' created by-reference
  _Unique(V1);             // change options of V1 to be by-value
  V2 := V1;                // creates a full copy of the V1 instance
  V2.name := 'John';       // modifies V2.name, but not V1.name
  writeln(V1.name);        // write 'James'
  writeln(V2.name);        // write 'John'
  V1 := _Arr(['root',V2]); // created as by-value by default, as V2 was
  writeln(V1._Count);      // write 2
  _UniqueFast(V1);         // change options of V1 to be by-reference
  V2 := V1;
  V1._(1).name := 'Jim';
  writeln(V1);
  writeln(V2);
  // both commands will write '["root",{"name":"Jim","year":1972}]'

The easiest approach is to stick to one set of options in your code, i.e.:

  • Either use the _*() global functions if your business code sends some TDocVariant instances to other parts of your logic, for further storage: in this case, the by-value pattern makes sense;
  • Or use the _*Fast() global functions if the TDocVariant instances are local to a small part of your code, e.g. used as schema-less Data Transfer Objects (DTO).

In all cases, be aware that, just as for any class type, the const, var and out specifiers of method parameters do not apply to the TDocVariant value, but to its reference.
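
Here is a minimal sketch of that behavior, using a hypothetical ChangeName procedure and a by-reference document (as created by _ObjFast()):

 // ChangeName is a hypothetical procedure, used only for illustration
 procedure ChangeName(aDoc: variant);
 begin
   aDoc.name := 'Jim'; // only the variant reference was copied, not the document
 end;
  ...
  V1 := _ObjFast(['name','John']);
  ChangeName(V1);
  writeln(V1.name);
  // will write 'Jim', since the by-reference document is shared with the callee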

Integration with other mORMot units

In fact, whenever a schema-less storage structure is needed, you may use a TDocVariant instance instead of strongly-typed class or record types:

  • The Client-Server ORM will support TDocVariant in any of the variant published properties of TSQLRecord;
  • Interface-based services will support TDocVariant as variant parameters of any method, which makes them perfect DTOs;
  • Since JSON support is implemented for any TDocVariant value from the ground up, it makes a perfect fit for working with AJAX clients, in a script-like approach;
  • If you use our SynMongoDB.pas unit to access a MongoDB server, TDocVariant will be the native storage used to create or access BSON array or object documents;
  • Cross-cutting features (like logging or record / dynamic array enhancements) will also benefit from this TDocVariant custom type.

We are pretty convinced that once you start playing with TDocVariant, you won’t be able to live without it any more.
It brings the full power of late-binding and schema-less patterns to your application, which can be very useful for prototyping or Agile development.
You do not need to rely on scripting engines like Python or JavaScript to get this feature.

ref: http://blog.synopse.info/post/2014/02/25/TDocVariant-custom-variant-type

Connecting to legacy databases and publishing a RESTful interface to it

Most systems, especially in the DDD area, need to integrate with a legacy system. In our case we had to communicate with a Firebird 1.5 database.

The first step was to define our Data Transfer Objects:

type
  TLegacyID = type RAWUTF8;
  TLegacyAccount = class(TSynPersistent)
  private
    fLegacyID: TLegacyID;
    fDateModified: TDateTime;
    fUserCreated: RAWUTF8;
    fDateCreated: TDateTime;
    fName: RAWUTF8;
    fisActive: Boolean;
  public
    constructor Create; overload; override;
    constructor Create( aID : TLegacyID; aName : RAWUTF8; aIsActive : Boolean;
      aDateCreated, aDateModified : TDateTime; aUserCreated : RAWUTF8 ); overload;
  published
property ID : TLegacyID read fLegacyID write fLegacyID;
    property Name : RAWUTF8 read fName write fName;
    property isActive : Boolean read fisActive write fisActive;
    property DateCreated : TDateTime read fDateCreated write fDateCreated;
    property DateModified : TDateTime read fDateModified write fDateModified;
    property UserCreated : RAWUTF8 read fUserCreated write fUserCreated;
  end;
  TLegacySupplier = class(TLegacyAccount)
  end;

Here we declare a distinct type to identify our legacy IDs (strings). We also do a basic mapping of our data layout, with a customized constructor for ease of creation; other methods can be added later to handle copies and assignments.
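
The constructor body was not shown above; a minimal sketch of an implementation matching the declared signature could be:

 constructor TLegacyAccount.Create(aID: TLegacyID; aName: RAWUTF8; aIsActive: Boolean;
   aDateCreated, aDateModified: TDateTime; aUserCreated: RAWUTF8);
 begin
   inherited Create;
   // simply copy the supplied values into the instance fields
   fLegacyID := aID;
   fName := aName;
   fisActive := aIsActive;
   fDateCreated := aDateCreated;
   fDateModified := aDateModified;
   fUserCreated := aUserCreated;
 end;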

The next step was to define our service:

type
  ILegacyStockQuery = interface(IInvokable)
    ['{2BDC9F78-B9C2-4621-A557-F87F02AC0581}']
    function GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
  end;

We’ll publish a service as LegacyStockQuery with a single method, GetSupplier. This method will return a JSON-encoded representation of our TLegacySupplier, ready to be consumed by a client.

To implement it:

type
  TLegacyStockQuery = class(TInterfacedObject, ILegacyStockQuery)
  private
    fDbConnection : TSQLDBConnectionProperties;
  public
    constructor Create( const aProps: TSQLDBConnectionProperties ); overload;
    function GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
    property DbConnection : TSQLDBConnectionProperties read fDbConnection write fDbConnection;
  end;

We keep a copy of our database connection properties local to our instance to ensure thread safety.

{ TLegacyStockQuery }
constructor TLegacyStockQuery.Create(const aProps: TSQLDBConnectionProperties);
begin
  fDbConnection := aProps;
  inherited Create;
end;
function TLegacyStockQuery.GetSupplier(const aID: TLegacyID; out Supplier: TLegacySupplier): TCQRSResult;
var
  Res : ISQLDBRows;
begin
  Result := cqrsNotFound;
  Res := fDbConnection.Execute( 'select * from SUPPLIERS where SUPPLIER_ID=? ', [aID] );
  if Res.Step then begin
    Result := cqrsSuccess;
    Supplier.ID := Res['SUPPLIER_ID'];
    Supplier.Name := Res['SUPPLIER_NAME'];
    Supplier.isActive := Res['ACTIVE_FLAG'] = 'Y';
    Supplier.DateCreated := Res['DATE_CREATED'];
    Supplier.DateModified := Res['DATE_MODIFIED'];
    Supplier.UserCreated := Res['USER_CREATED'];
  end;
end;

We execute the query against the legacy database and populate the DTO. Using the ISQLDBRows interface means less manual object lifetime management and cleaner code.

To kick the whole thing off we have:

procedure StartServer( aDbURI : RawUTF8 );
var
  aDbConnection : TSQLDBConnectionProperties;
  aStockServer  : TSQLRestServerFullMemory;
  aHTTPServer   : TSQLHttpServer;
begin
  aDbConnection := TSQLDBZEOSConnectionProperties.Create( aDbURI, '', '', '' );
  aStockServer  := TSQLRestServerFullMemory.Create([]);
  try
    aStockServer.ServiceDefine( TLegacyStockQuery.Create( aDbConnection ), [ILegacyStockQuery]);
    aHTTPServer := TSQLHttpServer.Create( DEFAULT_HTTP_PORT, [aStockServer] );
    try
      aHttpServer.AccessControlAllowOrigin := '*'; // allow cross-site AJAX queries
      writeln('Background server is running.'#10);
      writeln('Cross-Platform wrappers are available at ',
          DEFAULT_HTTP_PORT ,'/', DEFAULT_SERVER_ROOT );
      write('Press [Enter] to close the server.');
      readln;
    finally
      aHTTPServer.Free;
    end;
  finally
    aStockServer.Free;
  end;
end;

It would be better to use dependency injection here, but we’ll get into that later.

When invoking it, we used a Zeos DB URI.
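
A hypothetical example of such a ZDBC-style URI for a Firebird database is shown below; the protocol name, host, database path and credentials are all assumptions, to be adapted to your own Zeos driver and setup:

 StartServer('zdbc:firebird-1.5://localhost:3050/C:\data\legacy.fdb?username=sysdba;password=masterkey');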

Tips!

Remember to register your Object array types with

TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(<ArrayType>),<ObjType>);

e.g.

TJSONSerializer.RegisterObjArrayForJSON(TypeInfo(TLegacySupplierObjArray),TLegacySupplier);
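
The TLegacySupplierObjArray type used above is assumed to be declared as a plain dynamic array of the class, e.g.:

 type
   TLegacySupplierObjArray = array of TLegacySupplier;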

ref: https://tamingthemormot.wordpress.com/2015/07/14/connecting-to-legacy-databases/

External database speed improvements

Some major speed improvements have been made to our SynDB* units, and how they are used within the mORMot persistence layer.
It results in an amazing speed increase, in some cases.

Several optimizations took place in the source code trunk.

Overall, I observed a x2 to x10 performance boost with simple Add() operations, using ODBC, OleDB and direct Oracle access, when compared to previous benchmarks (which were already impressive).
BATCH mode performance is less impacted, since it already bypassed some of those limitations, but even in this operation mode there are some benefits (especially with ODBC and OleDB).

Here are some results, directly generated by the supplied “15 – External DB performance” sample.

Insertion speed

Here we test the insertion of some records, for most of our supplied engines.
We did the test with the UNIK conditional undefined, i.e. with no index on the Name field.

A Core i7 notebook was used as the hardware platform.
The Oracle 11g database is accessed remotely over a corporate network, so latency and bandwidth are not optimal.
The hard drive is an SSD this time – so we will see how it affects the results.

(values are inserted rows per second)

              SQLite3     SQLite3     SQLite3   TObjectList  TObjectList  SQLite3          SQLite3         SQLite3    Oracle  ODBC    Jet
              (file full) (file off)  (mem)     (static)     (virtual)    (ext file full)  (ext file off)  (ext mem)          Oracle
Direct        501         911         81870     281848       288234       548              952             72697      518     512     4159
Batch         523         891         102614    409836       417257       557              868             91617      77155   509     4441
Trans         90388       95884       96612     279579       286188       99681            70950           105674     1024    1432    4920
Batch Trans   110869      117376      125190    412813       398851       127424           126627          121368     62601   1019    4926

Performance gain is impressive, especially for “ODBC Oracle” and also “OleDB Jet”.
Since Jet/MSAccess is a local engine, it is faster than Oracle for one-record retrieval – it does not suffer from the network latency. But it is also faster than SQLite3 at insertion, due to its multi-threaded design – which is perhaps less ACID-compliant and less proven.
Note that this hardware configuration runs on an SSD, so even the “SQLite3 (file full)” configuration is very much boosted – about 3 times faster.
Our direct Oracle access classes achieve more than 77,000 inserts per second in BATCH mode (using the Array Binding feature).
The direct TObjectList in-memory engine reaches amazing speed when used in BATCH mode – more than 400,000 inserts per second!

Read speed

(values are read rows per second)

              SQLite3     SQLite3     SQLite3   TObjectList  TObjectList  SQLite3          SQLite3         SQLite3    Oracle  ODBC    Jet
              (file full) (file off)  (mem)     (static)     (virtual)    (ext file full)  (ext file off)  (ext mem)          Oracle
By one        26777       26933       122016    298400       301041       135413           133571          131877     1289    1156    2413
All Virtual   429331      427423      447227    715717       241289       232385           167420          202839     63473   35029   127772
All Direct    443773      433463      427094    711035       700574       432189           334179          340136     90184   39485   186164

Reading speed was also increased; the ODBC results show the biggest improvement.
The server-side statement cache for Oracle makes individual record reads 2 times faster. Wow.
The SQLite3 engine is still the most responsive SQL database here when it comes to reading.
Of course, the direct TObjectList engine is pretty fast – more than 700,000 records per second.

ref: http://blog.synopse.info/post/2013/01/28/External-database-speed-improvements

RESTful mORMot


Our Synopse mORMot Framework was designed in accordance with Fielding’s REST architectural style, even without using HTTP and without interacting with the World Wide Web.
Systems which follow REST principles are often referred to as “RESTful”.

Optionally, the framework is able to serve standard HTTP/1.1 pages over the Internet (by using the mORMotHttpClient / mORMotHttpServer units and the TSQLHttpServer and TSQLHttpClient classes), via an embedded, fast and low-resource HTTP server.

The standard RESTful methods are implemented, i.e. GET/PUT/POST/DELETE.

The following methods were added to the standard REST definition, for locking individual records and for handling database transactions (which speed up database processing):

  • LOCK to lock a member of the collection;
  • UNLOCK to unlock a member of the collection;
  • BEGIN to initiate a transaction;
  • END to commit a transaction;
  • ABORT to rollback a transaction.

The GET method has an optional pagination feature, compatible with the YUI DataSource Request Syntax for data pagination – see the TSQLRestServer.URI method and http://developer.yahoo.com/yui/datatable/#data. Of course, this breaks the “Every Resource is Identified by a Unique Identifier” RESTful principle – but it is much easier to work with, e.g. to implement paging or custom filtering.

From the Delphi code point of view, a RESTful Client-Server architecture is implemented by inheriting some common methods and properties from a main class.

A full set of classes inherit from this TSQLRest abstract parent, e.g. TSQLRestClient, TSQLRestClientURI and TSQLRestServer.
The TSQLRest class therefore implements a common ancestor for both Client and Server classes.

BLOB fields

BLOB fields are defined as TSQLRawBlob published properties in the class definitions – this type is an alias to the RawByteString type (defined in SynCommons.pas for Delphi up to 2007, since RawByteString only appeared with Delphi 2009). But their content is not included in the standard RESTful methods of the framework, in order to spare network bandwidth.

The RESTful protocol allows BLOB to be retrieved (GET) or saved (PUT) via a specific URL, like:

 ModelRoot/TableName/TableID/BlobFieldName

This is even better than the standard JSON encoding, which works well but converts BLOB content to/from hexadecimal values, and therefore needs twice the normal size. By using such a dedicated URL, data can be transferred as raw binary.

Some dedicated methods of the generic TSQLRest class handle BLOB fields: RetrieveBlob and UpdateBlob.
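
As a minimal usage sketch (the TSQLBaby class and its Photo BLOB field are hypothetical here, and the exact method signatures may differ between framework versions):

 var Photo: TSQLRawBlob;
  ...
  // read the BLOB content of the record with ID=1 from the server
  if Client.RetrieveBlob(TSQLBaby,1,'Photo',Photo) then
    // ...process Photo, then write it back via the dedicated BLOB URL
    Client.UpdateBlob(TSQLBaby,1,'Photo',Photo);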

JSON representation

The “04 – HTTP Client-Server” sample application available in the framework source code tree can be used to show how the framework is AJAX-ready, and can be proudly compared to any other REST server (like CouchDB) also based on JSON.

First, deactivate the authentication by changing the parameter from true to false in Unit2.pas:

 DB := TSQLRestServerDB.Create(Model,ChangeFileExt(paramstr(0),'.db3'),
 false);

and by commenting the following line in Project04Client.dpr:

  Form1.Database := TSQLHttpClient.Create(Server,'8080',Form1.Model);
  // TSQLHttpClient(Form1.Database).SetUser('User','synopse');
  Application.Run;

Then you can use your browser to test the JSON content:

  • Start the Project04Server.exe program: the background HTTP server, together with its SQLite3 database engine;
  • Start any Project04Client.exe instances, and add/find any entry, to populate the database a little;
  • Close the Project04Client.exe programs, if you want;
  • Open your browser, and type into the address bar:
      http://localhost:8080/root
    
  • You’ll see an error message:
    TSQLHttpServer Server Error 400
    
  • Type into the address bar:
      http://localhost:8080/root/SampleRecord
    
  • You’ll see the result of all SampleRecord IDs, encoded as a JSON list, e.g.
     [{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
    
  • Type into the address bar:
      http://localhost:8080/root/SampleRecord/1
    
  • You’ll see the content of the SampleRecord of ID=1, encoded as JSON, e.g.
    {"ID":1,"Time":"2010-02-08T11:07:09","Name":"AB","Question":"To be or not to be"}
    
  • Type into the address bar any other REST command, and the database will reply to your request…

You have got a full HTTP/SQLite3 RESTful JSON server in less than 400 KB. :)

Note that Internet Explorer and old versions of Firefox do not recognize the application/json; charset=UTF-8 content type for internal viewing. This is a limitation of those browsers: the above requests will download the content as .json files, but it won’t prevent AJAX requests from working as expected.

Stateless ORM

Our framework implements REST as a stateless protocol, just like the HTTP/1.1 protocol it can use as its communication layer.

A stateless server is a server that treats each request as an independent transaction that is unrelated to any previous request.

At first, you could find this a bit disappointing compared to a classic Client-Server approach. In a stateless world, you are never sure that your client data is up to date: the only place where the data is safe is the server. In the web world, this is not surprising. But if you are coming from a rich-client background, it may concern you: you probably have the habit of writing synchronization code on the server to replicate all changes to all its clients. This is no longer necessary in a stateless architecture.

The main rule of this architecture is to ensure that the Server is the only reference, and that the Client is able to retrieve any pending update from the Server side. That is, always modify record content on the server side, then refresh the client to retrieve the modified value. Do not modify the client side directly, but always pass through the Server. The UI components of the framework follow these principles. Client-side modification can be performed, but must be made in a separate, autonomous table/database. This avoids any synchronization problem in case of concurrent client modifications.
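
A minimal sketch of this pattern, assuming a TSQLRestClient instance named Client and a hypothetical record instance Rec already retrieved from the server:

  Rec.Name := 'Updated name';
  Client.Update(Rec);            // always modify through the server
  Client.Retrieve(Rec.ID,Rec);   // then refresh the client-side copy from the server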

A stateless design is also pretty convenient when working with complex solutions.
Even Domain-Driven Design tends to restrict state to the smallest extent possible, since state introduces complexity.

ref: http://blog.synopse.info/post/2014/01/10/RESTful-mORMot

REpresentational State Transfer (REST)

Representational state transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web.
As such, it is not just a method for building “web services”. The terms “representational state transfer” and “REST” were introduced in 2000 in the doctoral dissertation of Roy Fielding, one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification, on which the whole Internet relies.


There are 5 basic fundamentals of the web which are leveraged to create REST services:

  1. Everything is a Resource;
  2. Every Resource is Identified by a Unique Identifier;
  3. Use Simple and Uniform Interfaces;
  4. Communication is Done by Representation;
  5. Every Request is Stateless.

Resource-based

The Internet is all about getting data. This data can be in the form of a web page, image, video, file, etc.
It can also be a dynamic output, like getting the customers who have newly subscribed.
The first important point in REST is to start thinking in terms of resources rather than physical files.

You access the resources via some URI, e.g.

  • http://www.mysite.com/pictures/logo.png – Image Resource;
  • http://www.mysite.com/index.html – Static Resource;
  • http://www.mysite.com/Customer/1001 – Dynamic Resource returning XML or JSON content;
  • http://www.mysite.com/Customer/1001/Picture – Dynamic Resource returning an image.

Unique Identifier

Older web techniques, e.g. ASPX or ColdFusion, requested a resource by specifying parameters, e.g.

 http://www.mysite.com/Default.aspx?a=1;a=2&b=1&a=3

In REST, we add one more constraint to the URI: each URI should uniquely represent an item of the data collection.

For instance, you can see below the unique URI format for fetching customers and orders:

Customer data                                 URI
Get Customer details with name “dupont”       http://www.mysite.com/Customer/dupont
Get Customer details with name “smith”        http://www.mysite.com/Customer/smith
Get orders placed by customer “dupont”        http://www.mysite.com/Customer/dupont/Orders
Get orders placed by customer “smith”         http://www.mysite.com/Customer/smith/Orders

Here, “dupont” and “smith” are used as unique identifiers to specify a customer.
In practice, a name is far from unique, therefore most systems use a unique ID (like an integer, a hexadecimal number or a GUID).

Interfaces

To access those identified resources, basic CRUD activity is identified by a set of HTTP verbs:

HTTP method   Action
GET           List the members of the collection (one or several)
PUT           Update a member of the collection
POST          Create a new entry in the collection
DELETE        Delete a member of the collection

Then, at the URI level, you can define the type of collection, e.g. http://www.mysite.com/Customer to identify the customers, or http://www.mysite.com/Customer/1234/Orders to access the orders of a given customer.

This combination of HTTP method and URI replaces a list of English-based method names, like GetCustomer / InsertCustomer / UpdateOrder / RemoveOrder.

By Representation

What you are sending over the wire is in fact a representation of the actual resource data.

The main representation schemes are XML and JSON.

For instance, here is how customer data is returned by a GET method:

 <Customer>
   <ID>1234</ID>
   <Name>Dupond</Name>
   <Address>Tree street</Address>
 </Customer>

Below is a simple JSON snippet for creating a new customer record with name and address:

 {Customer: {"Name":"Dupont", "Address":"Tree street"}}

In response to this data transmitted with a POST command, the RESTful server will return the ID of the just-created record.

The clarity of this format is one of the reasons why, in mORMot, we prefer to use the JSON format instead of XML or any proprietary format.

Stateless

Every request should be an independent request, so that we can scale up using load-balancing techniques.

An independent request means that the state needed to process it is sent along with the data, so that the server can carry it forward from one step to the next without relying on previous requests.

ref: http://blog.synopse.info/post/2014/01/10/REpresentational-State-Transfer-%28REST%29