Using Google Coral on macOS 10.13.5

Google announced that the Coral TPU now supports macOS!

Unfortunately, there are several limitations. The official TF Lite distribution only supports macOS 10.14 and later.

However, a .whl file is just a .zip file, so downloading the .whl for the proper Python version, unzipping it, and copying the contents into the dist-packages folder just works.
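The trick relies on the fact that a wheel is a plain zip archive. A minimal sketch, assuming a hypothetical wheel filename and dist-packages path (substitute the tflite_runtime wheel matching your Python version and your actual site-packages directory):

```shell
#!/usr/bin/env bash
# Hypothetical wheel name -- substitute the one for your Python version.
WHL=tflite_runtime-2.1.0-cp36-cp36m-macosx_10_14_x86_64.whl
if [ -f "$WHL" ]; then
  # A .whl is a zip archive, so unzip extracts it directly.
  unzip -o "$WHL" -d unpacked
  # Destination path is an assumption; adjust for your install.
  cp -R unpacked/* /usr/local/lib/python3.6/dist-packages/
fi
```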

After installing the Edge TPU runtime, the dylib path must be modified:

install_name_tool -change /opt/local/lib/libusb-1.0.0.dylib /usr/local/lib/libusb-1.0.0.dylib /usr/local/lib/libedgetpu.1.dylib

Also, the latest libusb-1.0.0 should be installed. The error below means libusb is too old to support USB 3.0. Just remove the old libusb and install it with Homebrew.

Incompatible library version: libedgetpu requires version 3.0.0 or later, but libusb-1.0.0.dylib provides version 2.0.0

Then it works like a charm!

It is really convenient to be able to test TFLite models on the Edge TPU from the development machine.

Finally, Intel OpenVINO supports the NCS2 on macOS

As of Intel OpenVINO 2019 R3, the NCS2 is fully supported on macOS.

The official page lists 10.14.4 as the minimum requirement, but installation and compilation completed flawlessly on 10.13.6.

As expected, slight fluctuations in probability were observed.

To run the examples, it is recommended to install OpenCV independently, instead of the OpenCV build automatically installed with the OpenVINO installer. Some dylibs need rewiring via install_name_tool, like below:

sudo install_name_tool -change @rpath/libmkl_tiny_tbb.dylib /opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_mac/lib/libmkl_tiny_tbb.dylib /opt/intel/openvino_2019.3.376/deployment_tools/inference_engine/lib/intel64/libMKLDNNPlugin.dylib


Also, a C++ error might be raised in an OpenCV 4.1 Homebrew environment, due to an AVCaptureDevice authorization issue on macOS older than 10.14 Mojave. Upgrading OpenCV to 4.2.1 resolves the issue:


API version ............ 2.1

Build .................. 32974

Description ............ API

[ INFO ] Parsing input parameters

[ INFO ] Reading input

2020-01-13 01:34:57.003 interactive_face_detection_demo[97791:3322633] +[AVCaptureDevice authorizationStatusForMediaType:]: unrecognized selector sent to class 0x7fff821896a0

[ERROR:0] global /localdisk/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/videoio/src/cap.cpp (193) open VIDEOIO(AVFOUNDATION): raised unknown C++ exception!

[ ERROR ] Cannot open input file or camera: cam

Install Caffe with CUDA support in High Sierra

1. Download and install Anaconda.

2. Download the protobuf 3.5.1 source, then compile and install it.

3. Install dependencies with Homebrew:

brew install -vd snappy leveldb gflags glog szip lmdb hdf5 opencv
brew install boost@1.59 boost-python@1.59
brew link boost@1.59 --force
brew link boost-python@1.59 --force

4. git clone the Caffe repository.

5. cp Makefile.config.example Makefile.config, then set:
CUDA_DIR := /usr/local/cuda

CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
 -gencode arch=compute_35,code=sm_35 \
 -gencode arch=compute_50,code=sm_50 \
 -gencode arch=compute_52,code=sm_52 \
 -gencode arch=compute_60,code=sm_60 \
 -gencode arch=compute_61,code=sm_61 \
 -gencode arch=compute_61,code=compute_61

ANACONDA_HOME := $(HOME)/anaconda
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
 $(ANACONDA_HOME)/include/python2.7 \
 $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include


6. Change the following in the Makefile, from:

ifeq ($(OSX), 1)
 CXX := /usr/bin/clang++

to:

ifeq ($(OSX), 1)
 CXX := /usr/bin/clang++ -std=c++11

7. export CAFFE_ROOT="$HOME/caffe"

8. make all -j4

9. make runtest

10. Fix the dylib paths:

install_name_tool -change "@rpath/libhdf5_hl.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5_hl.10.dylib" .build_release/tools/caffe
install_name_tool -change "@rpath/libhdf5.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5.10.dylib" .build_release/tools/caffe
install_name_tool -change "@rpath/libcudnn.5.dylib" "/usr/local/cuda/libcudnn.dylib" .build_release/tools/caffe

install_name_tool -change "@rpath/libhdf5_hl.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5_hl.10.dylib" .build_release/test/test_all.testbin
install_name_tool -change "@rpath/libhdf5.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5.10.dylib" .build_release/test/test_all.testbin
install_name_tool -change "@rpath/libcudnn.5.dylib" "/usr/local/cuda/libcudnn.dylib" .build_release/test/test_all.testbin

11. make pycaffe

12. make pytest

install_name_tool -change "@rpath/libhdf5_hl.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5_hl.10.dylib" python/caffe/
install_name_tool -change "@rpath/libhdf5.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5.10.dylib" python/caffe/
install_name_tool -change "@rpath/libcudnn.5.dylib" "/usr/local/cuda/libcudnn.dylib" python/caffe/

13. To use Matcaffe, modify Makefile.config (the order matters!):

MATLAB_DIR := /Applications/

LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib $(MATLAB_DIR)/bin/maci64 /usr/lib

install_name_tool -change "@rpath/libhdf5_hl.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5_hl.10.dylib" matlab/+caffe/private/caffe_.mexmaci64
install_name_tool -change "@rpath/libhdf5.10.dylib" "/Users/<UserName>/anaconda/lib/libhdf5.10.dylib" matlab/+caffe/private/caffe_.mexmaci64

How to automatically back up SciNote deployed on a local server

  1. Install drivesync on Ubuntu.
  2. Make a script as below:

    #!/usr/bin/env bash
    cd /home/$username/Scinote
    docker exec scinote_db_production pg_dump -h localhost -p 5432 -U postgres -d scinote_production -c -b -f /home/db.bak
    docker cp scinote_db_production:/home/db.bak /home/$username/Documents/drive/Backup/SciNote/Files/
    docker cp scinote_web_production:/usr/src/app/public/system /home/$username/Documents/drive/Backup/SciNote/Files/
    cd /home/$username/Downloads/drivesync
    ruby drivesync.rb

  3. chmod +x the script.
  4. Automate it using crontab.
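A crontab entry to run the backup nightly could look like this (the script path, log path, and schedule are assumptions; edit with `crontab -e`):

```shell
# min hour dom mon dow  command -- run the SciNote backup every night at 02:30
30 2 * * * /home/$username/scinote_backup.sh >> /home/$username/scinote_backup.log 2>&1
```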



Data recovery

If you are restoring data on another host, you need to copy the backed-up config/application.yml and production.env files to the corresponding locations.
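Copying the two config files can be sketched as follows; BACKUP and SCINOTE are hypothetical paths standing in for your backup location and the SciNote checkout on the new host:

```shell
#!/usr/bin/env bash
# BACKUP and SCINOTE are placeholders -- adjust both to your layout.
BACKUP=/path/to/backup
SCINOTE=/path/to/scinote
for f in config/application.yml production.env; do
  # Copy each config file only if the backup actually contains it.
  if [ -f "$BACKUP/$f" ]; then
    cp "$BACKUP/$f" "$SCINOTE/$f"
  fi
done
```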

Restore the PostgreSQL database

  • Copy data to container:
    docker cp db.bak scinote_db_production:/home/db.bak
  • Enter the scinote_db_production container and execute:
    psql -h localhost -p 5432 -U postgres -d scinote_production -f /home/db.bak
  • Copy the file data back and rebuild (on the host):
    docker cp ~/Documents/sciweb-data-bak/system/. scinote_web_production:/usr/src/app/public/system/
    make docker-production

Recover data in the file system

  • Copy data to container:
docker cp /your/backup/path/ scinote_web_production:/usr/src/app/public/system

Update source file

  1. Rebuild the docker image: make docker-production
  2. Migrate data:
    • make cli-production
    • rails db:migrate
  3. Restart the docker containers:
    docker-compose -f ./docker-compose.production.yml up -d

Tensorflow 1.12 in MacOS High Sierra with Cuda from Source

After numerous trials and errors, TensorFlow 1.12 with CUDA was successfully compiled on macOS High Sierra (on a Mac Pro 2012).


CUDA Toolkit 10.1, cuDNN 7.5, Python 3.6, Xcode 8.3.2

0. Install Bazel (for an unknown reason, 0.18.1 causes a pip installer problem).

Download from

chmod +x

sudo ./

1. Make a working folder and clone Tensorflow code.

git clone

cd ./tensorflow

2. Checkout release 1.12

git checkout r1.12

3. Change the code.

Remove all __align__(sizeof(T)) or __align__(8) qualifiers from the files below. (Be careful not to introduce double-space typos.)

(e.g. extern __shared__ __align__(sizeof(T)) unsigned char shared_memory[]; -> extern __shared__ unsigned char shared_memory[];)

  • tensorflow/core/kernels/
  • tensorflow/core/kernels/
  • tensorflow/core/kernels/
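The removal can be scripted with sed; a sketch, noting that the kernel file names above are truncated in the source, so the *.cu.cc glob below is an assumption:

```shell
#!/usr/bin/env bash
# Strip the __align__ qualifiers from the CUDA kernel sources in place.
# (BSD sed on macOS needs the empty '' argument after -i.)
for f in tensorflow/core/kernels/*.cu.cc; do
  [ -f "$f" ] || continue   # skip if the glob matched nothing
  sed -i '' -e 's/__align__(sizeof(T)) //g' -e 's/__align__(8) //g' "$f"
done
```

Using a single space inside each pattern avoids the double-space typo the note warns about.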

Change linkopts = ["-lgomp"] to #linkopts = ["-lgomp"] in the file below.

  • tensorflow/third_party/gpus/cuda/BUILD.tpl

Change constexpr Variant() noexcept = default; to Variant() noexcept = default; in the file below (remove constexpr).

  • tensorflow/core/framework/variant.h

4. Export environment variables.

export CUDA_HOME=/usr/local/cuda

export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:/usr/local/cuda/extras/CUPTI/lib:/Developer/NVIDIA/CUDA-10.1/lib



export PATH=/Developer/NVIDIA/CUDA-10.1/bin${PATH:+:${PATH}}

5. Configuration.

cd tensorflow

./configure

Specify the python and python library paths.

Google Cloud Platform support? -> n

Hadoop File System support? -> n

Amazon AWS Platform support? -> n

Apache Kafka Platform support? -> n

XLA JIT support? -> n

GDR support? -> n

VERBS support? -> n

OpenCL SYCL support? -> n

CUDA support? -> y

CUDA SDK version you want to use. -> 10.1

Then accept all default values.


6. Run Bazel. (It takes time).

bazel build --config=cuda --config=opt --action_env PATH --action_env LD_LIBRARY_PATH --action_env DYLD_LIBRARY_PATH //tensorflow/tools/pip_package:build_pip_package --verbose_failures --define=grpc_no_ares=true

7. Make the .whl installer file.

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

8. Install using the created .whl. (You need numpy >= 1.16.)

pip install /tmp/tensorflow_pkg/tensorflow-1.12.1-cp36-cp36m-macosx_10_7_x86_64.whl

Merging anatomical regions


1. Download the program for extracting the annotation table to a LUT file.


2. Extract the annotation file and save it in an editable format:

  WFSAnnotToLUT annot_file_path output_file_path

For example,

WFSAnnotToLUT /iPsych/freesurfer/subjects/001/label/lh.aparc.annot /iPsych/freesurfer/subjects/001/label/lh.annot.txt

* Replace the subject directory and subject numbering as appropriate.
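Extraction over all subjects can be batched; a sketch following the example above (the standard $SUBJECTS_DIR layout and the left hemisphere only are assumptions):

```shell
#!/usr/bin/env bash
# SUBJECTS_DIR follows the example path above -- adjust to your setup.
SUBJECTS_DIR=/iPsych/freesurfer/subjects
for s in "$SUBJECTS_DIR"/*/; do
  annot="${s}label/lh.aparc.annot"
  # Run the extraction only for subjects that have the annotation file.
  if [ -f "$annot" ]; then
    WFSAnnotToLUT "$annot" "${s}label/lh.annot.txt"
  fi
done
```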

Create a new lookup table

1. Follow the guides below.