Couchbase and Teiid Bang!

Over the last few months I’ve been creating communities within our consulting teams. There are a lot of consultants who are very interested in JBoss Data Virtualization (a.k.a. Teiid in the JBoss community), and I’ve been putting together a team of consultants with varying interests to get them familiar with this project/product.

So, for those of you who may not be familiar with Teiid (JDV): it has many powerful features, and one of the most powerful is the ability to extend the product. Teiid ships with many translators that can bridge ANSI SQL queries to NoSQL, traditional SQL, flat files and other types of data sources; however, translators don’t exist for every data source. In comes the community: massive exaggerated CHEERING, cue sound bite. My team and I have been focused on extending Teiid to translate ANSI SQL for Couchbase, which has its roots in CouchDB. There is a thread as to why I ended up choosing Couch.

We also chose Couchbase primarily for its ease of use. Couchbase comes with N1QL, a very SQL-like query language, so translating from SQL to N1QL is much simpler than working with a whole new API full of switches and exceptions and, most importantly, having to learn it all before we start. My main goal was/is to build a foundation from which we can branch out into more complicated data sources.
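To make the point concrete, here is a hypothetical side-by-side (the bucket and field names are made up, not output from our translator):

```sql
-- ANSI SQL against a relational customer table
SELECT name, city FROM customer WHERE zip = '30339';

-- Roughly the same query in N1QL against a Couchbase bucket,
-- where city and zip live in a nested address sub-document
SELECT c.name, c.address.city FROM customer c WHERE c.address.zip = '30339';
```

The two shapes line up almost one-to-one, which is exactly what makes the translation work tractable.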

Here we are. It has taken a bit of time to get the project stood up in a repeatable fashion, such that I can add new team members without much setup time. We have Docker images set up for JDV and Couchbase, so it is a matter of issuing a few docker commands to get the servers up. Currently, we have the resource adapter working and will soon be doing the actual translations with N1QL. To get started with this project (please help 🙂 ) you can pull from the main Teiid branch.

$ mkdir teiid-couchbase

$ cd teiid-couchbase

$ git init

$ git remote add upstream

$ git fetch upstream

$ git checkout -b couchbase upstream/8.12.x

Now you have the source for the Couchbase translator; next, let’s set up your environment:

$ git clone

You should be able to follow the instructions on standing it up. If you see any issues, let me know or open a pull request. That’s it! I’ll update when I have some news about this project.



Docker and Systemd Don’t Jive

What is the hubbub with this? Docker is awesomesauce, you may say. Docker is awesomesauce… for most things and reasonable expectations. Generally I don’t write about all my blunders because I would never have time to get anything done :). This one took the cake, though.

I was working through a new security hacking book called Black Hat Python (which I highly recommend) and was happily hacking away when I got to a section where I was testing a basic proxy server. The idea was to use the proxy to redirect traffic from a known service through to my remote target destination. In order to test the code I wanted to set up a test FTP server. I thought, hey, this is a good opportunity for me to use Docker. I’m familiar with Docker, but by no means an expert. It would have been much easier to just stand up a VM and be on my way. But nooooo.

I also decided I’d use the latest release from Fedora, Fedora 23, as my base. Again, it crossed my mind to just spin up a cloud image, install the package and be on my… Okay, simple: tell my Dockerfile to install vsftpd.


FROM fedora:23
MAINTAINER Jason Marley <>

RUN dnf -y update && dnf clean all
RUN dnf -y install vsftpd

USER root

RUN systemctl enable vsftpd && systemctl start vsftpd

# Expose Ports
EXPOSE 21

CMD ["/bin/bash"]


Super simple. Theoretically, once I start my Docker container after the build, I should have an FTP service I can point my proxy to.

$ sudo docker build -t jmarley/fedora_vsftp -f DockerFile-fvsftp .


Nope. You’ll get an error like below.


$ sudo docker run -d -t jmarley/fedora_vsftp
Error response from daemon: Cannot start container 2f75e4425e7be7b49183e67e39bfc065c8fbe0523afb214e14c17abef19e6b6b: [8] System error: exec: "/usr/bin/systemctl start vsftpd": stat /usr/bin/systemctl start vsftpd: no such file or directory


And you’re saying: wait a second, when I ran my build everything built as I expected with no errors, and I know the vsftpd service is installed and enabled. Here is the deal: since the switch from SysV init to systemd, systemctl talks to the systemd daemon over D-Bus, and if your Docker container doesn’t have systemd running as PID 1 (or the D-Bus service it needs), you’ll see this error. Dan Walsh did a good write-up on it as well.

On to fixing: so what do I do? Well, you could just use a cloud-init image, which probably makes more sense, but if you’re stubborn you’ll keep pressing. The solution is to add another layer that installs all the systemd dependencies so they are available to the container. You can see examples here .
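For reference, a minimal sketch of what that systemd-friendly layer might look like, following the pattern from Dan Walsh’s write-up (the fedora:23 base and vsftpd service are carried over from my example above; his write-up also trims unit files that make no sense in a container, which I omit here):

```dockerfile
FROM fedora:23
ENV container docker

# Install systemd alongside the service
RUN dnf -y install systemd vsftpd && dnf clean all

# enable works at build time (it only creates symlinks);
# starting the service is left to systemd at run time
RUN systemctl enable vsftpd

# systemd expects the host's cgroup filesystem to be mounted in
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 21

# Run systemd as PID 1 instead of a shell; it brings up vsftpd
CMD [ "/usr/sbin/init" ]
```

The container then has to be run with the cgroup mount, something like `sudo docker run --privileged -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro jmarley/fedora_vsftp`.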

Given all the effort and searching over the issues I encountered above, I should have simply stood up a cloud image and added the services I needed. In my next blog or two I will dive into Fedora’s cloud images and why I like them. Also, one of my current community projects is starting to pick up steam, and I want to do a series on it, as it touches quite a lot of the stack.

Happy Hacking

HashiCorp’s Packer… and my QEMU snafu

How cool is HashiCorp’s Packer product? I think it is pretty sweet. I primarily work with Fedora and RHEL via QEMU, but it is neat that Packer can use one configuration file to apply the same settings to each platform if I wanted.

Anyways, a colleague brought this to my attention last week, and I thought, as I’m going through the install for OpenShift v3, I might as well create a reusable configuration file for each system. And here we are 🙂 .

I wanted to bring up an issue I was facing in case you’re like me and don’t like to run anything on your host machine, preferring to spin things up on a VM server. I was hitting this issue:

2015/11/02 15:52:16 packer-builder-qemu: 2015/11/02 15:52:16 Qemu stderr: Could not initialize SDL(x11 not available) - exiting
2015/11/02 15:52:16 ui error: ==> qemu: Error launching VM: Qemu failed to start. Please run with logs to get more info.
==> qemu: Error launching VM: Qemu failed to start. Please run with logs to get more info.

What does the error ^^^^ mean? When Packer starts QEMU, QEMU tries to bring up a graphical console via SDL, and since my server is headless there is no X11 display available, hence the error above. How to fix?

"builders": [{
     "qemuargs": [
        [ "-m", "4444M" ],
        [ "-nographic", "" ]

Simply add the qemuargs ^^^ to your JSON template and everything will start swimmingly.


automation with purpose…

Every engineer, architect, system administrator and project manager has to make decisions about automation. Those decisions come from different perspectives, but we all have to figure out what makes sense. The main reason I’m writing this blog is that I have been seeing a lot of chatter recently about a posteriori (dynamic) configurations, and grumblings about performance, and I know that doing more a priori will improve performance.

When does it make sense to do configurations a priori versus a posteriori? Obviously it depends on what you’re trying to do. Some things to consider are:

  • Do the configurations need to be dynamic?

When configuring the environment, is there anything that truly needs to be dynamic? For services with dynamic configurations, how much of the configuration is actually dynamic versus static? In other words, if only 2 commands out of 50 need to be done dynamically, I would consider setting up static configurations for the 48 and doing only the 2 dynamically. There are diminishing returns to this approach: if the example were the other way around, you would have 48 dynamic configurations and 2 static ones. Scale dictates whether it is worth maintaining both static and dynamic configurations; with a huge number of machines the split approach becomes more reasonable, because even tenths of a second of startup time per server gets expensive when the count gets high.

  • How many machines will this affect?

Regardless, someone will have to design and create the architecture of the environment, but if there are only a few machines, spending time building a dynamic environment can inhibit productivity and will not be worthwhile.

  • What are the resource requirements for spinning up the environment?

When you’re creating your test machine, how long does it take to configure the environment? That is approximately how much time each environment will take to be created dynamically.

  • Does this need to be repeatable?

What is the life expectancy of the machine and/or services? If the life expectancy isn’t long, why spend the additional effort making the entire process automated?

The basic takeaway: always try to do things a priori, and when you can’t, do only the configurations that need to happen dynamically, dynamically. Trying to do everything dynamically is expensive both to implement repeatably and computationally. If you aim to leverage automation with a priori configurations, I guarantee you will improve performance and lower costs.
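To put rough numbers on the startup-cost point above (both figures are hypothetical, purely for illustration):

```shell
# Back-of-envelope cost of repeating configuration work dynamically at every boot.
# Both numbers are made up for illustration.
per_boot_ms=300    # 0.3s of dynamic configuration work per server start
boots=100000       # server starts across the fleet's lifetime

total_min=$(( per_boot_ms * boots / 1000 / 60 ))
echo "~${total_min} minutes of cumulative startup cost"
```

Shaving that 0.3s by moving the work a priori pays for itself quickly once the fleet is large.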

How to secure bash command line history

Lately I have become aware of an unsafe practice when maintaining and implementing JBoss EAP servers (or any servers, really): whatever commands a particular user executes are persisted to that user’s ~/.bash_history file. Of course, history can be disabled for particular users, but not all system administrators take this into consideration; hence the concern.

Why is this dangerous?

If there is a security breach into a system, say with sudo privileges, the attacker not only has access to the current system but also to the bash_history files. With respect to JBoss EAP, the server ships several CLI tools for creating server certificates, management users, vaults and vaulted attributes, and if care isn’t taken their invocations can leak secrets into history.


As a user, I create a keystore and a vault, and then add secure vault attributes.

Log in as the good guy

[root@mybox ~]# sudo su - JbossAdm

Create keystore

[JbossAdm@mybox jboss-eap-6.3]$ keytool -genseckey -v -alias vaultAlias -keyalg AES -keysize 128 -storetype jceks -dname "CN=something" -keypass vaultKeyPass -keystore vault.jks -storepass vaultKeyPass

Create vault

[JbossAdm@mybox jboss-eap-6.3]$ ./bin/ --keystore vault.jks --keystore-password vaultKeyPass --alias vaultAlias --salt 12457898 --iteration 15 -c

Create vault attribute

[JbossAdm@mybox jboss-eap-6.3]$ ./bin/ --keystore vault.jks --keystore-password vaultKeyPass --alias vaultAlias --salt 12457898 --iteration 15 --attribute certKeystorePass --sec-attr hostKeyPass

Add management user

[JbossAdm@mybox jboss-eap-6.3]$ ./bin/ --silent -u admin -p admin.2015

Nefarious user Login

It is fair to say that having root access alone does not guarantee visibility into the passwords of the vault, keystore or attributes. But look how easy it is to get the passwords for JBoss EAP and the server certificate store. With this information an attacker can do untold damage.

Interrogate bash history for keywords

[root@mybox ~]# grep '.*vault.*\|.*keytool.*\|.*add-user.*' /home/JbossAdm/.bash_history
./bin/ --silent -u admin -p admin.2015
keytool -genseckey -v -alias vaultAlias -keyalg AES -keysize 128 -storetype jceks -dname "CN=something" -keypass vaultKeyPass -keystore vault.jks -storepass vaultKeyPass
./bin/ --keystore vault.jks --keystore-password vaultKeyPass --alias vaultAlias --salt 12457898 --iteration 15 -c
./bin/ --keystore vault.jks --keystore-password vaultKeyPass --alias vaultAlias --salt 12457898 --iteration 15 --attribute certKeystorePass --sec-attr hostKeyPass

How to prevent it (or at least slow down nefarious acts)

Since this scenario is in a data center that has been around for some time, and who knows what has been run on this box before, the first order of business is to clear the history.

Remove only suspect values

This looks complicated, but it lists the history, reverses the order, greps for key values, cuts out the history number associated with each match, then loops through and deletes each of the records found.

[JbossAdm@mybox ~]$ histnum=$(history | tac | grep '.*vault.*\|.*keytool.*\|.*add-user.*\|.*jboss-cli.*' | sed 's/^[ ]*//;s/[ ].*//')
[JbossAdm@mybox ~]$ for del in $histnum; do history -d $del; done
[JbossAdm@mybox jboss-eap-6.3]$ history -w
tac is needed because otherwise the numbering becomes inaccurate after the first removal: deleting an entry renumbers everything after it, so later numbers can extend past the total number of entries (in some cases). With reversed order we always start with the highest number and work toward the smallest, ensuring each number is still valid when we get to it.

Alternatively, we could clear the current history entirely:

[JbossAdm@mybox jboss-eap-6.3]$ history -c
[JbossAdm@mybox jboss-eap-6.3]$ history -w


Now, we could do this each time we configure a server, but the ideal solution is to build it into the VM profile so it is handled automatically. For more information on HISTIGNORE, visit reference [1].

[JbossAdm@mybox jboss-eap-6.3]$ HISTIGNORE='keytool *:*vault*:*add-user*:*jboss-cli*'
[JbossAdm@mybox jboss-eap-6.3]$ export HISTIGNORE
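Since an exported variable only lasts for the current session, one way to make it stick for the account (a sketch, using the same patterns as above) is to append it to the user’s ~/.bashrc:

```shell
# Persist the HISTIGNORE patterns in the JbossAdm account's ~/.bashrc
# so every new shell picks them up automatically.
cat >> ~/.bashrc <<'EOF'
HISTIGNORE='keytool *:*vault*:*add-user*:*jboss-cli*'
export HISTIGNORE
EOF
```

Baking this into the VM profile, as mentioned above, means no one has to remember to run it.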

Execute commands again

I am suppressing the output, because it really isn’t important to this example.

[JBossAdm@mybox jboss-eap-6.3]$ keytool -genseckey -v -alias vaultAlias -keyalg AES -keysize 128 -storetype jceks -dname "CN=something" -keypass vaultKeyPass -keystore vault.jks -storepass vaultKeyPass
[Storing vault.jks]
[JbossAdm@mybox jboss-eap-6.3]$ ./bin/ --keystore vault.jks --keystore-password vaultKeyPass --alias vaultAlias --salt 12457898 --iteration 15 -c
[JbossAdm@mybox jboss-eap-6.3]$ ./bin/ --keystore vault.jks --keystore-password vaultKeyPass --alias vaultAlias --salt 12457898 --iteration 15 --attribute certKeystorePass --sec-attr hostKeyPass
[JbossAdm@mybox jboss-eap-6.3]$ ./bin/ --silent -u admin -p admin.2015

list history

As we expect, we see no values with vault, keytool or add-user:

[root@mybox ~]# grep '.*vault.*\|.*keytool.*\|.*add-user.*' /home/JbossAdm/.bash_history


For the experienced and dedicated attacker this is merely a speed bump, but that is okay. The goal is only to slow down and deter nefarious activity.

onetab saved my life and almost made me cry

A colleague of mine saw I had a bazillion tabs open and said, man, you have a lot of tabs open. I said, well, I don’t want to forget what I was working on when I get pulled in another direction. Then, curious, I asked him how many he had, because I was sure he had as many if not more, and he said 1000 tabs. I asked how that was possible and he responded: OneTab. OneTab was originally meant for Chrome but has been extended to Firefox, which is totally awesome.

For the past 6 months I’ve been using it like a champ and may have passed my colleague. Then there was trouble with my latest upgrade to Fedora 21: I found out every version I tried has hardware incompatibilities with my laptop, which forced me to use Ubuntu… I know… it is actually quite nice… Anyways, I’m having to restore everything, and Firefox did not play nice even though I copied all my Firefox files over, OneTab’s data among them.

I looked and looked and didn’t see any documentation on what to do for OneTab recovery should your system bomb. Well, I figured it out after some searching/troubleshooting.

step 1 navigate to


step 2 copy database to the same location as ^^^ on new machine


step 3 enjoy!

Docker and my initial thoughts

This past week I had an opportunity to tinker a bit with Docker, and I think it is really cool… but I’m not sure it is quite the silver bullet that everyone makes it out to be.

Will Docker streamline deployments across environments? Yes.

Will Docker ensure consistent configurations for the server environment? Yes.

Will Docker replace Maven? No.

Will Docker replace continuous build servers? No.

Will Docker be used by developers or administrators? Unclear. If a developer is working in a somewhat loose data center, the developer will need to document/script out the deployment steps so they can be replicated in each environment, which can be fraught with holes. Generally the developer scripts out deployments with respect to the application server, not the OS. This presents an issue, because now we are expecting developers to have more in-depth knowledge of Linux to properly understand the Linux container, and then the application-server container on top of it.

Let’s assume the developer has in-depth knowledge of the OS and is capable of building images: networks, services, configuring the application server, etc. As a system admin, I would have to wonder: what was enabled/disabled within the Docker image, is it secure, is it configured correctly, and am I responsible for reviewing it? And what about each development team? A workload like this sounds like a new position, which would eat into any revenue savings from switching to Docker.

If the system administrator is responsible for packaging the application, the outlook doesn’t look much better. Creating Docker images well is challenging and takes time to learn. No doubt, with this many layers in a Docker image, some tuning would be needed, and the tuning wouldn’t necessarily be repeatable across teams/projects.

There is definitely a place for Docker, but I’m not sure it belongs at this phase of the development process. Docker seems to me better suited to the Linux container alone, versus the Linux container plus the JEE container.

Or maybe it makes sense for a small company/team that has a lot of rock stars on it.

I’ll post more as I work with it!