I’d like to describe qmake options I often use in my Qt projects.

OBJECTS_DIR = build
DESTDIR = dist

I don’t use shadow builds, so the first option keeps all object files in the build/ subfolder, making it easier to browse through the root folder. With the second option my executables end up in the dist/ subfolder.

VERSION = 0.1
DEFINES += APP_VERSION=\\\"$$VERSION\\\"

This defines the project’s version and makes it available as the string APP_VERSION in the sources. And yes, that many backslashes are required for it to work as a string. By the way, I use it in the following piece of code, which I put into my main() functions in order for everything to work well:

Q_INIT_RESOURCE(resources);
// seed the pseudo-random number generator so each run gets different qrand() values
qsrand(qHash(QTime::currentTime().toString("hmsz")));

QApplication app(argc, argv);
// Qt 4 only: interpret tr() literals and C strings as UTF-8
QTextCodec::setCodecForTr(QTextCodec::codecForName("UTF-8"));
QTextCodec::setCodecForCStrings(QTextCodec::codecForName("UTF-8"));

app.setApplicationName(QObject::tr("Application Name"));
app.setApplicationVersion(QObject::tr(APP_VERSION));
app.setOrganizationName(QObject::tr("Xilexio"));

I use the last three properties later in my applications through the qApp macro, usually to display an appropriate window title.
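For example, a window title can be composed from those properties like this (a minimal sketch; I’m assuming it runs inside a QMainWindow subclass):

// qApp is declared in <QApplication>
setWindowTitle(QString("%1 %2")
        .arg(qApp->applicationName())
        .arg(qApp->applicationVersion()));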

win32 {
    LIBS += -Llib/win32
}

unix {
    LIBS += -Llib/unix
}

An example of how to use conditionals on the win32 and unix variables (defined automatically on Windows and Unix systems respectively) in order to link with the appropriate libraries. This works if you ship library binaries together with the code and put them in a different directory for each system. The same conditionals can be used together with “CONFIG+=variableName” put into Project > Build Settings > Build Steps > qmake step > Additional arguments. I use them to add test builds that way.

CONFIG(debug, debug|release) {
    CONFIG += console
}

Another conditional, this time active only for the debug build. On Windows we need to add “console” to CONFIG for output from qDebug() and friends to appear; this switch takes care of it. For a release-only conditional, simply replace “debug” with “release” in the first argument.

INCLUDEPATH += lib/include

This adds a space-separated list of folders for global #includes. They are passed to the g++ compiler with -I and are also used for automatic code completion.

LIBS += -lz

Arguments for linking with libraries, passed to the compiler during linking. Put your -L and -l arguments here. Note: with g++, the line above searches for libz.a, z.a, libz.dll and z.dll.

TARGET = great-program

Name of your executable.

build
debug
dist
release
Makefile*
*.pro.user
ui_*
moc_*
qrc_*
*.rc
object_script.*

Not really an option, but this is the generic .gitignore file I use in my Qt projects. These are all generated or machine-dependent files and shouldn’t be checked into a Git repository.

I’ll describe the most common open-source software licenses for libraries and software you can find on the internet. If you’re ever planning to release your application to the public, it’s important to know some basics about licenses, as the cost of violating them can be very high if you get sued. I’m also attaching my recommendations for the usage of these licenses. Please note that I’m not a lawyer and I do not guarantee that the statements in this post are correct.

Notable licenses

So, here are the notable licenses, from the least to the most restrictive (in terms of usage in commercial applications):

  • Public Domain – you can do whatever you want with code or binaries published this way; there are no limitations and you don’t even need to mention the authors. It’s a good license for educational materials like tutorials or code snippets.
  • MIT – no real limitations; you only need to include a copy of its copyright notice, which mostly states the lack of warranty, with your software. This is a good license if you want to be credited for creating your software but release it for free to everyone – both commercial and open-source developers.
  • BSD – there are a few versions of it, but its usage is the same as the MIT license, with a few more restrictions: some versions forbid using the authors’ names to promote derived products (3-clause/new BSD) or require including a notice about the authors in advertisements (4-clause/old BSD).
  • LGPL (Lesser General Public License) – an open-source license used mainly for libraries that allows you to link dynamically with an unmodified version without any impact on the licensing of your software. In the case of static linking, direct usage of the library’s sources or modifications to the library, your software (or at least the modifications made to the library) must also be released under the LGPL (or optionally GPL) license. That’s called the copyleft virus – this license makes sure that the library will stay LGPL and nobody (except the copyright holders) can make it closed-source in the future. Another restriction is that you must make the code of the LGPL software available to everyone you distribute binaries to. This license is good for libraries that you want to be developed by the open-source community, making sure that no company will take it over and start releasing new proprietary versions (perhaps killing the original open-source development). It has a few downsides though – there are quite a lot of applications in which you can’t link libraries dynamically, for example applications developed for iOS that go to Apple’s App Store.
  • GPL (General Public License) – an open-source license with the copyleft virus in its full form. Linking to or using the source of GPL software in any way forces your own software to be GPL too. Also, as with LGPL, you have to provide the sources of the GPL software when distributing binaries. This license is primarily meant for applications, but some libraries use it too. It’s good for open-source application projects that are meant to be developed by the open-source community. For applications, GPL doesn’t do much harm to commercial usage, but using it for libraries is entirely different: it effectively shuts off most commercial usage, as commercial developers will usually just use another non-GPL library with similar functionality or write their own.

Note: there is a GPL linking exception clause that some libraries use to keep their code open-source (GPL) while at the same time allowing you to link them (statically or dynamically) to software under a license of your choice. This works like LGPL, but is less restrictive in that it allows any kind of linking.

GPL vs non-GPL

There are many arguments for and against GPL, and I think the preference is mostly connected with how close you are to writing commercial applications.

Taking into consideration that open-source developers won’t maintain their software forever and that money is what opens opportunities for big, quality software projects, I suggest going with the MIT license if you want your small or medium (non-commercial) library project to ever be used – or GPL with the linking exception if you have an open-source community that would like to develop it.

As for (non-commercial) applications, MIT is always a good option. Choosing GPL here could be a good idea if you really have an open-source community that would be interested in developing the application and you want to keep it open-source.

Mixed GPL/LGPL/commercial

There are many libraries out there that offer you a choice:

  • GPL or commercial – some companies offer their libraries either under the GPL license (for open-source projects) or under a commercial license. I guess that’s a good way to make your software better known (in order to sell commercial versions later), and it should work for small or medium libraries. Example: Ext JS.
  • GPL or LGPL or commercial – there are libraries where you can choose any of those three licenses. However, offering LGPL means that other companies can use the library for free. This approach can be successful for large projects that get widely used (thanks to LGPL plus high-quality content) and can also bring in money from support, warranties and additional features in the commercial licenses. Example: Qt.

“Bypassing” GPL licenses

First, LGPL. Don’t try to effectively embed an LGPL library in your executable (as if statically linked) by linking dynamically and then putting the library in resources or something similar. The license doesn’t actually say anything about gcc’s “-shared” option; instead, it describes LGPL as effectively GPL with an exception for using very small parts of the code (such as headers) to communicate with the library.

Second, GPL (or LGPL). All software derived from GPL software must be GPL. This won’t change, but that doesn’t mean that software that merely uses GPL software, not as its core, can’t be non-GPL or even commercial. It’s all about the definition of derived work here. So if you use an external GPL application to perform some operation for your application – say compiling (with the GPL gcc) or unzipping (with the LGPL 7-Zip) – you should be fine, especially if it is just an optional part of your application or you allow the user to configure your software to use replacements. For example, Qt Creator does just that by letting you select your compilers (like gcc) in its IDE. You have to make sure your work isn’t effectively derived, as in the case of, say, writing an executable wrapping all of a GPL library’s functions into a command line and then using the resulting application through some weird protocol just as you would use the original library. Remember that this is always treading on thin ice, since it’s all about “how much” derived your work is.

Qt comes with the QtTest library, which lets you write your own test suites for internal and GUI components. However, using it together with your application in an easy-to-maintain way is not so obvious. In this tutorial I’ll focus more on the framework and ease of use, and less on how to write the contents of the tests themselves. There is already a tutorial on how to write QtTest unit tests. I’m using Qt 4.8 for this tutorial, but these methods and concepts can be used in Qt 5 too.

Some code to test

I created an example project, which is a simple math expression parser with a GUI. I’m sharing it under the Public Domain license on bitbucket, so use it as you please. There are three non-GUI classes there:

  • Token – a class representing a single entity in a math expression – a number or an operator (+, -, *, /),
  • MathTokenizer – a class used to split a string into tokens while validating the input,
  • MathParser – a class used to parse and compute an expression consisting of a sequence of tokens.

Writing test suites

Having a decent chunk of code, we can now move on to testing it. Tests are grouped into test suites. A test suite is represented by a normal QObject class whose test functions are its slots. There’s a special naming convention used here:

  • Tests have the “test” prefix in their method names,
  • initTestCase() and cleanupTestCase() are methods called before and after the execution of the whole test suite,
  • init() and cleanup() are methods called before and after the execution of each test in the suite.

The structure is as follows:

#include <QtTest>

class TestSomething : public QObject {
    Q_OBJECT

private slots:
    // functions executed by QtTest before and after test suite
    void initTestCase();
    void cleanupTestCase();

    // functions executed by QtTest before and after each test
    void init();
    void cleanup();

    // test functions - all functions prefixed with "test" will be run as tests
    void testSomething();
};

Any function not conforming to these naming conventions will simply be ignored and not run as a test, so we can safely create helper functions. I’d like to note that this whole mechanism works thanks to QObject’s meta-object system, which lets you list and access all your slots dynamically (and much more).
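As an illustration, this is roughly how the meta-object system can enumerate slots at runtime (a minimal sketch of the Qt 4 API, not QtTest’s actual implementation):

#include <QObject>
#include <QMetaMethod>
#include <QDebug>

// print the signature of every slot of a QObject-derived instance
void listSlots(const QObject* object) {
    const QMetaObject* meta = object->metaObject();
    for (int i = 0; i < meta->methodCount(); ++i) {
        QMetaMethod method = meta->method(i);
        if (method.methodType() == QMetaMethod::Slot)
            qDebug() << method.signature(); // methodSignature() in Qt 5
    }
}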

Let’s create such a test suite in the test/ folder (or some other location). It’s a good idea to write some stubs at the beginning to see if everything works. So let’s do this:

void TestSomething::testStubPass() {
    QVERIFY(true);
}

void TestSomething::testStubFail() {
    QVERIFY(false);
}

After setting everything up, two messages like these should appear after running the tests:

PASS   : TestSomething::testStubPass()
FAIL!  : TestSomething::testStubFail() 'false' returned FALSE.

Running test suites

The QtTest library is used by making a separate executable that runs your tests. If you did this by creating another project, you’d have to maintain two lists of source files, libraries, flags etc. I’ll describe the approach I’ve taken to avoid that.

The default way is to use the QTEST_MAIN macro, but that lets you use just one test suite per executable. Let’s use another approach and create a main() function for the tests in test/main.cpp in the following fashion:

#include <QApplication>
#include <QTextCodec>
#include <QtTest>
#include "testsuite1.h"
#include "testsuite2.h"

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    QTextCodec::setCodecForTr(QTextCodec::codecForName("UTF-8"));
    QTextCodec::setCodecForCStrings(QTextCodec::codecForName("UTF-8"));

    TestSuite1 testSuite1;
    TestSuite2 testSuite2;
    // multiple test suites can be run like this
    return QTest::qExec(&testSuite1, argc, argv) |
            QTest::qExec(&testSuite2, argc, argv);
}

Note the usage of “|” instead of “||”. This is in order to run all tests even if one of the test suites fails.
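The same idea scales to more suites by accumulating the exit codes (a sketch):

int status = 0;
status |= QTest::qExec(&testSuite1, argc, argv);
status |= QTest::qExec(&testSuite2, argc, argv);
// add further suites here; status stays non-zero if any suite failed
return status;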

That would be enough to run a test suite if it were a standalone application. But since we don’t want to maintain two applications, we’ll share the build between them.

Let’s create a new build configuration that produces the unit-test executable while also including in the build all the code used in the original application. Go to Projects > Build Settings and duplicate your debug configuration, naming the new one “Test”. Then go to Build Steps, select the qmake step and add the additional argument “CONFIG+=test”. “Additional arguments” is a place where you can define extra parameters for your build. We’ll use this together with conditionals in the Qt project file (.pro) to create the test build.
[Screenshot: the qmake build step with “CONFIG+=test” in Additional arguments]
Note: this step has to be repeated for each checkout of your project, as these settings are stored in the “.pro.user” file (which shouldn’t be checked in, as it contains machine-specific build information).

The QtTest application differs from our application only in using a different main() function and having additional test suite classes. If you didn’t put much code into your main() function (which should be the case), you should be fine with just removing main.cpp from the test build. You can use the “test” configuration argument defined earlier to incorporate that. Let’s create the test build by:

  • adding the QtTest library,
  • renaming our target executable to one with a test name (optional),
  • removing the original main.cpp file (in order to avoid conflicting main() functions),
  • adding the sources of our test suites.

Assuming main.cpp contains your original main function, test/main.cpp contains the main function for the tests and test/testSuite* contain the test suites, you can use the following code:

test {
    message(Test build)
    QT += testlib
    TARGET = UnitTests

    SOURCES -= main.cpp

    HEADERS += test/testSuite1.h \
        test/testSuite2.h

    SOURCES += test/main.cpp \
        test/testSuite1.cpp \
        test/testSuite2.cpp
} else {
    message(Normal build)
}

For a normal build, this only displays a message. For a test (CONFIG+=test) build, it also makes the changes mentioned above.

Writing test methods

The content of these tests is just like normal test cases – initialize some data, run the tested code and check that everything is okay. I won’t go into details – the QtTest tutorial elaborates enough on them. Just remember to browse through the available macros and use QVERIFY/QVERIFY2 and QCOMPARE instead of Q_ASSERTs. See test/testMathTokenizer.cpp in my example project for examples of the test methods themselves. If you want to write more exhaustive tests, have a look at Qt’s data-driven tests. There are also GUI tests, but those work well mainly on widgets, as your main window usually has private access to UI elements.
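For a taste of what such a test method can look like, here is a minimal sketch (tokenize() and isNumber() are hypothetical stand-ins for the example project’s API):

void TestMathTokenizer::testSimpleExpression() {
    MathTokenizer tokenizer;
    QList<Token> tokens = tokenizer.tokenize("1+2");
    QCOMPARE(tokens.size(), 3);
    QVERIFY2(tokens.first().isNumber(), "first token should be a number");
}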

This setup allows for development in which you often write tests and run them, but it is not quite as convenient as hitting a key combination to run your tests and see red/green. Also, switching between builds (Debug/Test) is required if you want to alternate between running the tests and the application. So test-driven development is possible, although it requires more work than in other libraries/languages.

I chose to use the VTK 6.0.0 library in my project in order to visualize the solution of a system of partial differential equations in my Qt 4 application. By the way, I’m solving the PDEs using getfem++.

I had to compile VTK first, though. It wasn’t quite as complex as compiling getfem++, but it still required some tweaking for my non-CMake-based project. You can find compiled VTK binaries in my vtk bitbucket repository.

I was building this library on Windows 7 with GNUStep’s mingw32 with g++ 4.6.1.

The VTK library build is CMake-based. The good news is that the library build itself works well. The bad news is that using it in your non-CMake application requires some care. To build VTK, first install CMake if you haven’t already done so and make sure it’s in your PATH. The build I made was with the additional Qt and Views modules, so I also had to include the Qt libraries in PATH:

export PATH="/C/Qt/4.8.4/bin:/C/Program Files (x86)/CMake 2.8/bin:$PATH"

After unzipping the code I ran CMake from the command line, including my build options and using MSYS makefiles (the normal ones used by GNUStep’s make):

cmake -D BUILD_EXAMPLES=1 -D BUILD_SHARED_LIBS=1 -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/home/xilexio/vtk -D VTK_Group_Qt=1 -G "MSYS Makefiles" .
make
make install

That created the VTK release library. Note that the release VTK library uses the release Qt libraries. For a debug VTK library, I changed CMAKE_BUILD_TYPE to Debug.

The problems with VTK arrive when using it in a non-CMake build. I use Qt Creator and its project file (.pro) for building my application. There are two things to consider here. First – building and running with the appropriate VTK libraries. For VTK’s Qt module to work, you have to use release VTK with release Qt and debug VTK with debug Qt. Sounds easy, but it requires configuration in two places:

  • You have to include a condition in your project file to link with the appropriate VTK version (here we additionally restrict it to Windows):
    win32 {
        CONFIG(debug, debug|release) {
            LIBS += -Llib/vtk/debug
        } else {
            LIBS += -Llib/vtk/release
        }
    }
  • You have to run the application with the appropriate VTK libraries in PATH. In Qt Creator, go to Projects > Run and create two run configurations – debug and release. In Run Environment, set the PATH to contain only one version of the VTK libraries (debug or release). Another option could be renaming all the VTK libraries, for example by appending “d” to the file names, but I haven’t tested that.

If you forget about those steps, you’ll get the following error:

QWidget: Must construct a QApplication before a QPaintDevice

The cause of this error is usually using two different Qt libraries (in our case both debug and release). So make sure to ship the proper set of Qt and VTK libraries when deploying the application.

The second thing to remember is supplying the appropriate defines to the VTK headers. As stated here, if you’re not using a CMake build, you have to set those defines yourself. You must define two variables before including any VTK header. I tried adding the defines to my project file:

DEFINES += vtkRenderingVolume_AUTOINIT=...

but in the end I wasn’t able to find the right combination of backslashes and quotes to make it work. So I had to go with a precompiled header:

PRECOMPILED_HEADER = pch.h

and pasted the default options into it:

#ifndef PCH_H
#define PCH_H
#define vtkRenderingCore_AUTOINIT 4(vtkInteractionStyle,vtkRenderingFreeType,vtkRenderingFreeTypeOpenGL,vtkRenderingOpenGL)
#define vtkRenderingVolume_AUTOINIT 1(vtkRenderingVolumeOpenGL)
#endif // PCH_H

If you forget about this step, the program will crash once you actually try to use the library (in my case, after clicking the QVTKWidget window), giving the following error:

Generic Warning: In c:/GNUstep/msys/1.0/home/Xilexio/vtksrc2/VTK6.0.0/Rendering/Core/vtkRenderWindow.cxx, line 35
Error: no override found for 'vtkRenderWindow'.

With all that configuration, VTK becomes usable in a Qt project. However, there’s one thing to note if you’re deploying your application. For some reason, a hardcoded directory path VTK_MATERIALS_DIRS generated during installation exists in vtkRenderingOpenGLConfigure.h. Apparently it’s used to locate some files with vtkXMLShader::LocateFile (and maybe in other places too). My guess is that as long as you don’t use magical functions like “find my file without me specifying the root directory”, it should work on other machines too.
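For completeness, this is roughly what embedding VTK in a Qt widget looks like once everything above is in place (a minimal sketch, assuming VTK was built with the Qt module as described):

#include "pch.h" // the AUTOINIT defines must precede any VTK header
#include <QVTKWidget.h>
#include <vtkSmartPointer.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>

// create a Qt widget with an empty VTK renderer inside
QVTKWidget* createVtkWidget(QWidget* parent) {
    QVTKWidget* widget = new QVTKWidget(parent);
    vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
    widget->GetRenderWindow()->AddRenderer(renderer);
    return widget;
}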

I wanted to find a numerical library that could solve a system of partial differential equations (PDEs) and would support automatic creation of a mesh. My other requirement was being able to release my application under my favorite MIT license, so GPL libraries were out of the question. There are many FEM libraries; however, most of them are either GPL, have GPL dependencies (like UMFPACK in the latest releases), don’t have automatic mesh creation or are not mature enough. After searching for a while, I decided on getfem++, which looked like a promising LGPL library that would suit my needs.

getfem++ is a finite element library, which might be useful if you want to numerically solve a system of PDEs. It uses the LAPACK and BLAS libraries for matrix/vector operations, the gmm++ library for solving linear systems defined by sparse matrices, optionally the Qhull library for automatic mesh creation and optionally the muparser library for parsing mathematical expressions (like a function determining a Dirichlet boundary condition). The licenses were okay – BSD/MIT/BSD-ish for LAPACK, BLAS, Qhull and muparser, and LGPL for getfem++ and gmm++.

Everything was great until I actually tried to compile this set with MinGW and make a shared library out of it. The results of my work can be found in my getfem++ bitbucket repository. If you’re not interested in the compilation and just want to use this library with MinGW, you can stop reading here.

Let me note that I was doing the compilation on Windows 7 with GNUStep’s mingw32 with g++ 4.6.1.

First, the dependencies. I downloaded precompiled win32 LAPACK 3.4.1 and BLAS dynamic libraries for mingw32 from here. I’d like to mention that the LAPACK and BLAS libraries need two basic libraries to run – libgfortran and libquadmath.

The next one was muparser 2.2.3. While it did build dynamically, for some reason the compiled dynamic library didn’t work with getfem++, so I decided to go with a custom static muparser build (it’s under the MIT license, so that’s okay). At this point I’m not even sure if customizing the build was required, but it worked. After unzipping the sources I built a dynamic library:

cd build
mingw32-make -f makefile.mingw SHARED=1

That built the object files with the flags appropriate for a dynamic library. Then I created the static library myself from the produced files:

cd build/obj/gcc_shared_rel
ar cru libmuparser.a muParser*.o
ranlib libmuparser.a

The last dependency is Qhull 2012.1. I didn’t build the library; I just copied qhull.dll from the bin/ folder of its package. I took the headers from src/libqhull/*.h. The problem was that getfem++ required a qhull.h header, but it was named libqhull.h in Qhull. So I renamed it to qhull.h and changed the #include’s in the other headers to look for qhull.h. I also tried building a static Qhull library, but for some reason it didn’t work well with getfem++.

The last task was to build the getfem++ 4.2 library itself. It required quite a bit of tweaking of exports and experimenting with the builds of its dependencies to make it work. The exports I used in my msys console (put into .profile) were:

export CPPFLAGS="-L/home/Xilexio/lib -I/home/Xilexio/include"
export CXXFLAGS="$CPPFLAGS"
LDFLAGS="-L/home/Xilexio/lib"
export LD_LIBRARY_PATH="/home/Xilexio/lib:$LD_LIBRARY_PATH"
export LIBRARY_PATH="C:/GNUstep/msys/1.0/home/Xilexio/lib;$LIBRARY_PATH"

Here /home/Xilexio/lib contained all the previously built libraries and /home/Xilexio/include contained all the headers (note: the Qhull headers were supposed to be in a qhull subdirectory). These exports allowed proper compilation and let the executables find the libraries. Later I compiled getfem++ itself with:

./configure --enable-qhull --enable-muparser --enable-shared --prefix="/home/xilexio/getfem" && make && make install

That created a static getfem++ library. Next, I created a shared version of it by simply taking the libgetfem.a static library from the install directory, extracting all its contents (.o files) and combining them back together, this time into a dynamic library:

ar x libgetfem.a
g++ -shared -fPIC -Wl,--enable-auto-import -Wl,-no-undefined -Wl,--enable-runtime-pseudo-reloc -Wl,--out-implib,libgetfem_dll.a -o getfem.dll $CPPFLAGS *.o -lblas -llapack -lqhull -lmuparser

Done. This library worked for me.

But I still wanted to check whether the getfem++ tests were passing. Because they compile the test applications directly against the src/.libs/libgetfem.a static library, I had to trick the build process by copying getfem.dll over src/.libs/libgetfem.a. In the end, g++ somehow understood that it was a dynamic library and linked properly against it. The application didn’t work without getfem.dll in its location, so I guess the dynamic linking was done properly. That change made the tests build, but one of them failed – test_large_sliding_contact.cc. It worked after removing one of the asserts:


GMM_ASSERT1(err < 4e-6, "Erroneous gradient of normal vector ");

I think something might have gone wrong with the computations in this test, because I didn’t compile the library with QD (a high-precision floating-point library). By the way, I couldn’t find a version of QD that would work on Windows. Warning: getfem++’s user documentation states that a static library should be built, so this is not a normal build. There is also no problem in using the static getfem++ library version with non-GPL programs, because getfem++ uses the GCC Runtime Library Exception.

Here you can find the sources of a simple Qt 4.8 application that takes XML input and can do the following:

  • check its validity against a given XML Schema,
  • apply an XSLT transformation to it,
  • check the validity of the XSLT transformation output against a given XML Schema.

The purpose of this simple application was to test how much of the XML Schema and XSLT functionality is implemented in Qt 4.8 and to help in debugging XML Schema and XSLT, taking into account how much of it is supported. The result was that XML Schema 1.0 is supported pretty well, but XSLT isn’t implemented well enough for production use in either the XSLT 1.0 or the XSLT 2.0 standard.
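For reference, the core of such an application is quite small. Here is a minimal sketch using Qt 4.8’s XmlPatterns module (requires QT += xmlpatterns; the file names are placeholders):

#include <QXmlSchema>
#include <QXmlSchemaValidator>
#include <QXmlQuery>
#include <QFile>
#include <QUrl>

// validate input.xml against schema.xsd, then run transform.xsl on it
bool validateAndTransform(QString& output) {
    QFile schemaFile("schema.xsd");
    schemaFile.open(QIODevice::ReadOnly);
    QXmlSchema schema;
    schema.load(&schemaFile);
    if (!schema.isValid())
        return false;

    QFile xmlFile("input.xml");
    xmlFile.open(QIODevice::ReadOnly);
    QXmlSchemaValidator validator(schema);
    if (!validator.validate(&xmlFile))
        return false;

    QXmlQuery query(QXmlQuery::XSLT20); // XSLT 2.0 support is experimental
    query.setFocus(QUrl::fromLocalFile("input.xml"));
    query.setQuery(QUrl::fromLocalFile("transform.xsl"));
    return query.evaluateTo(&output);
}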

As stated here, Qt 4.8 implements XML Schema 1.0. One can read here that Qt 4.8 has only experimental support for XSLT 2.0. There’s also a list of unsupported features over there. Here are some things I found out:

  • for-each-group doesn’t really work.
  • for-each doesn’t work on a sequence of atomic types. For example, you can’t do this:
    <xsl:for-each select="distinct-values(/lib:record/lib:name)">

    The trick I used to bypass it in this case was simply getting those values as nodes:

    <xsl:for-each select="for $n in distinct-values(/lib:record/lib:name) return /lib:record[lib:name=$n][1]/lib:name">
  • A sequence of nodes can’t be passed in xsl:call-template parameters. To bypass that, I simply passed some atomic value identifying the sequence and recreated the sequence in the called template.
  • xsl:key doesn’t work (as stated in the documentation).

Generally, the lack of support for many features can be bypassed with weird tricks without increasing the problem’s complexity. As in the example above, it’s always better to iterate over distinct-values (XSLT 2.0) and take them as nodes from the file instead of using dreadful quadratic-time constructions like not(preceding-sibling::…) (which would probably be necessary in XSLT 1.0 without xsl:key support). Still, I wouldn’t recommend using Qt 4.8’s XSLT in real production unless the XSLT is really simple.

I had to run OpenCL computations with the following configuration:

  • Windows 7 64bit
  • mingw32
  • NVidia GPU

The problem is that NVidia provides only Visual Studio (2008 and 2010) versions of the OpenCL libraries in their samples. So the problem was getting a static .a or dynamic .dll 32-bit library for mingw32, along with the appropriate headers. Many of the approaches I tried failed (the most notable being the reimp tool, as described here), so I solved the problem in a less clean manner. I simply copied all the header files from NVidia’s samples and the dll file from one of NVidia’s folders on my PC. This made my code compile without problems and link with some warnings (presumably caused by the difference in function name decoration between the VS and g++ compilers) similar to these:

Warning: resolving _clGetPlatformIDs@12 by linking to _clGetPlatformIDs
Warning: resolving _clGetPlatformInfo@20 by linking to _clGetPlatformInfo

It worked nevertheless, thanks to automatic call correction. To suppress those warnings, I used the suggested --enable-stdcall-fixup linker option (meaning -Xlinker --enable-stdcall-fixup for g++). I compiled everything with the following call:

g++ -o app -O3 -Ilib -Llib -Xlinker --enable-stdcall-fixup -lOpenCL app.cpp

where lib was the folder containing CL folder with headers and OpenCL.dll file.
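To quickly check that the headers and the dll link correctly, a minimal smoke test like the one below can be built with the same command (a sketch added for illustration; it is not part of the original project):

#include <CL/cl.h>
#include <cstdio>

// query the number of available OpenCL platforms
int main() {
    cl_uint numPlatforms = 0;
    cl_int status = clGetPlatformIDs(0, NULL, &numPlatforms);
    if (status != CL_SUCCESS) {
        std::printf("clGetPlatformIDs failed with error %d\n", status);
        return 1;
    }
    std::printf("found %u OpenCL platform(s)\n", numPlatforms);
    return 0;
}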

Please note that I haven’t really tested what kind of impact this change would have on OpenCL performance; I assume there should be none. Also, I only used a few OpenCL 1.1 features in my project, so in some situations you might want to somehow check the compatibility of the header files and the dll file you use.

Here is a zip file containing the OpenCL.dll file with headers I used: OpenCL dll library with headers.


I had a problem – the PLD Linux system I was using had Chromium 21 installed. It was pretty unstable and crashed often, especially when using the Pig Toolbox plugin with it. I decided to get my own version directly from Google, but there was a catch: I didn’t have root rights to install it. I will describe how I resolved this.

First, I downloaded a .deb x64 package from the Google Chrome download site and extracted it with the command:

dpkg -x google-chrome-stable_current_amd64.deb chrome

The extracted folder “chrome” had 3 folders: etc, opt and usr. The interesting stuff – the binary files – is in opt/google/chrome. In a perfect world, we could just move opt/google/chrome anywhere and run Chrome with the ./chrome command inside that directory. But I got the following error:

./chrome: error while loading shared libraries: libudev.so.0: cannot open shared object file: No such file or directory

I used a pretty dirty way to resolve that problem. I just made a symlink libudev.so.0 in Chrome’s directory pointing at the libudev library that was already installed in the system:

ln -s /usr/lib64/libudev.so libudev.so.0

In your case, you’ll probably want to do a similar thing for any libraries Chrome can’t find but that are installed on your system (of course, everything might crash if a library version isn’t supported by Chrome). You might have to download some missing libraries too and put symlinks to them in Chrome’s folder. After getting rid of the library problems, I got the following error:

[3880:3880:0226/160426:FATAL:zygote_host_impl_linux.cc(125)] The SUID sandbox helper binary is missing: /opt/google/chrome/chrome-sandbox Aborting now.

As I read on forums later, it turns out that Google Chrome’s sandbox is implemented in a pretty bad way – it even has hardcoded paths in it. Essentially, that means that a standalone Chrome won’t run with the sandbox. Fortunately, one can resolve this by using the --no-sandbox option:

./chrome --no-sandbox

Now it runs perfectly. To complete everything, I made a script for myself (below) to run Chrome and bound a key to it. I moved my Chrome to ~/bin/chrome, so modify the path as you wish. I recommend putting an absolute path here, so that the script will run from anywhere.

#!/bin/bash
cd ~/bin/chrome
./chrome --no-sandbox --disk-cache-size=50000000 --allow-outdated-plugins

--disk-cache-size is an option that limits the cache size in bytes (50 MB above). Chrome can use quite a bit of cache – in my case, over 0.5 GB. I used --allow-outdated-plugins so that the old Flash/Java plugins installed on my system would work.

Warning: not using a sandbox is a security vulnerability. Don’t do this if security is important in your environment.

In this post I will describe three resources I frequently use when translating Japanese to English.

I’ve been learning Japanese for over 3 years now. In my opinion, learning its grammar is quite easy; the hardest part is remembering all the kanji and words. As I wanted to be able to translate sentences fairly quickly, I found tools that let me translate much faster, all of them free online services. Even though my Japanese skills are only moderate, I’ve already translated a few thousand lines fairly quickly with their help. Here they are.

WWWJDIC

http://wwwjdic.com

A great Japanese-English dictionary (actually consisting of other dictionaries, mainly EDICT). Apart from the basic dictionary, it incorporates many other functionalities connected to other web services. The ones I use the most are:

  • Dictionary lookup – not only in EDICT, but also in other dictionaries. Usually, though, if the word I’m looking for can’t be found in EDICT, there is no meaningful translation in the other dictionaries either.
  • Kanji lookup – finding the readings of kanji.
  • Kanji stroke order – my favorite – in the kanji lookup menu, there are links to stroke orders (the brush icons) for most kanji. They are taken from two external services. It’s a very handy feature if you’re not sure about the order or direction of some strokes. For example, see the stroke order for 渚.

Furiganizer

http://www.furiganizer.com

Now this is what made translating at a sane pace possible for me. It’s a tool that adds furigana to kanji. Instead of using the tools from Microsoft Word or OpenOffice Writer, it’s usually faster and more accurate to use this one. Another very handy feature is that it displays translations of words (taken from WWWJDIC), or at least of the separate kanji, straight away.

Nihongoresources

http://www.nihongoresources.com/dictionaries/onomatopoeia.html

The whole Nihongoresources site is a valuable tool, helpful in learning Japanese, but the unique part about it is its very large onomatopoeia and mimesis dictionary. Even WWWJDIC doesn’t have most of them. Onomatopoeia and mimesis are used really often in Japanese, so you can’t really translate any longer text written in colloquial language efficiently without such a tool.

Extra

http://nihonshock.com/2010/04/12-japanese-ime-tips/

If you’re using Microsoft IME to type Japanese, it’s definitely a good idea to gain some knowledge about its usage, as it helps you type faster. The site above is one of many with such information. I like it for its list of symbols you can convert your text into, and their keywords (like ☆ from ほし).