How to change CUDA version
Perhaps cleaner:
sudo update-alternatives --display cuda
sudo update-alternatives --config cuda
The learning rate looks a bit high. The curve decreases too fast for my taste and flattens out very soon. I would try 0.0005 or 0.0001 as a base learning rate if I wanted to get additional performance. You can quit after a few epochs anyway if you see that this does not work. The question … Read more
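If it helps, here is a minimal sketch of lowering base_lr by rewriting the solver definition with Caffe's protobuf bindings; the file names are placeholders, and you can of course just edit solver.prototxt by hand instead.

```python
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Load the existing solver definition ("solver.prototxt" is a placeholder path).
solver = caffe_pb2.SolverParameter()
with open('solver.prototxt') as f:
    text_format.Merge(f.read(), solver)

# Lower the base learning rate as suggested above.
solver.base_lr = 0.0005   # or 0.0001

# Write the modified definition to a new file and point your training run at it.
with open('solver_lower_lr.prototxt', 'w') as f:
    f.write(text_format.MessageToString(solver))
```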
What is the version of your Ubuntu install? Try this. In your Makefile.config try to append /usr/include/hdf5/serial/ to INCLUDE_DIRS:
--- INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
+++ INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
and rename hdf5_hl and hdf5 to hdf5_serial_hl and hdf5_serial in the Makefile:
--- LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5
+++ LIBRARIES … Read more
Why don’t you use the InfogainLoss layer to compensate for the imbalance in your training set? The Infogain loss is defined using a weight matrix H (in your case 2-by-2). The meanings of its entries are
[cost of predicting 1 when gt is 0, cost of predicting 0 when gt is 0
cost of predicting … Read more
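As a rough sketch of the mechanics (not part of the answer above): you can build H in numpy and save it as a binaryproto for the InfogainLoss layer to load; the entry values and file name below are assumptions you would replace with your own costs.

```python
import numpy as np
import caffe

# A hypothetical 2x2 weight matrix H; fill in the entries according to the costs
# described above (an identity H reproduces the ordinary multinomial log loss).
H = np.eye(2, dtype='f4')
H[0, 0] = 1.0   # weight applied to examples whose ground truth is class 0
H[1, 1] = 4.0   # e.g. up-weight the under-represented class 1

# Save H as a binaryproto so the InfogainLoss layer can load it via
# infogain_loss_param { source: "infogain_H.binaryproto" } (file name is a placeholder).
blob = caffe.io.array_to_blobproto(H.reshape((1, 1, 2, 2)))
with open('infogain_H.binaryproto', 'wb') as f:
    f.write(blob.SerializeToString())
```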
A quick guide to Caffe’s convert_imageset
Build
The first thing you must do is build caffe and caffe’s tools (convert_imageset is one of these tools). After installing caffe and making it, make sure you ran make tools as well. Verify that a binary file convert_imageset is created in $CAFFE_ROOT/build/tools.
Prepare your data
Images: put all images … Read more
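As an illustration of the "prepare your data" step (the folder layout and class names below are hypothetical): the list file that convert_imageset reads is just one "relative/image/path label" pair per line, which you can generate with a few lines of Python.

```python
import os

# Hypothetical layout: images of class 0 under train/cat/, class 1 under train/dog/.
root = 'train/'
labels = {'cat': 0, 'dog': 1}

# convert_imageset expects a text file with one "<path relative to root> <label>" per line.
with open('train_list.txt', 'w') as f:
    for cls, label in labels.items():
        for fname in sorted(os.listdir(os.path.join(root, cls))):
            f.write('{}/{} {}\n'.format(cls, fname, label))

# Then, roughly: $CAFFE_ROOT/build/tools/convert_imageset train/ train_list.txt train_lmdb
```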
A Caffe net juggles two “streams” of numbers. The first is the data “stream”: images and labels pushed through the net. As these inputs progress through the net, they are converted into high-level representations and eventually into vectors of class probabilities (in classification tasks). The second “stream” holds the parameters of the different layers, the weights of … Read more
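To make the two “streams” concrete, here is a small pycaffe sketch (the file names are placeholders): net.blobs holds the data flowing through the net, while net.params holds each layer's learned parameters.

```python
import caffe

# Hypothetical file names; any trained net will do.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# First "stream": the data/activations flowing through the net (one blob per top).
for name, blob in net.blobs.items():
    print('activation', name, blob.data.shape)

# Second "stream": the learned parameters (weights, biases) of each layer.
for name, params in net.params.items():
    print('parameters', name, [p.data.shape for p in params])
```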
It is a common practice to decrease the learning rate (lr) as the optimization/learning process progresses. However, it is not clear how exactly the learning rate should be decreased as a function of the iteration number. If you use DIGITS as an interface to Caffe, you will be able to visually see how the different … Read more
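For reference, these are the decay schedules Caffe's SGD solver documents for the common lr_policy values, written out as a small helper; the default hyper-parameters below are arbitrary examples, not recommendations.

```python
import math

def caffe_lr(policy, base_lr, it, gamma=0.1, power=0.75, stepsize=10000, max_iter=100000):
    """Learning rate at iteration `it` for the common Caffe lr_policy values."""
    if policy == 'fixed':
        return base_lr
    if policy == 'step':
        return base_lr * gamma ** (it // stepsize)
    if policy == 'exp':
        return base_lr * gamma ** it
    if policy == 'inv':
        return base_lr * (1.0 + gamma * it) ** (-power)
    if policy == 'poly':
        return base_lr * (1.0 - float(it) / max_iter) ** power
    if policy == 'sigmoid':
        return base_lr / (1.0 + math.exp(-gamma * (it - stepsize)))
    raise ValueError('unknown lr_policy: %s' % policy)

# e.g. inspect the "step" schedule every 1000 iterations:
# [caffe_lr('step', 0.01, it) for it in range(0, 100000, 1000)]
```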
In Keras, non-trainable parameters (as shown in model.summary()) are the weights that are not updated during training with backpropagation. There are mainly two types of non-trainable weights: The ones that you have chosen to keep constant when training. This means that Keras won’t update these weights during training at all. The ones that … Read more
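A small sketch illustrating both kinds (the layer sizes here are arbitrary): BatchNormalization contributes non-trainable moving statistics, and freezing a layer with trainable = False moves its weights into the non-trainable count as well.

```python
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential([
    Dense(4, input_shape=(3,)),   # ordinary trainable weights and biases
    BatchNormalization(),         # gamma/beta are trainable; moving mean/variance are not
    Dense(1),
])

# Freezing a layer moves its weights into the non-trainable count too.
model.layers[0].trainable = False

model.compile(optimizer='sgd', loss='mse')
model.summary()   # compare the "Trainable params" and "Non-trainable params" totals
```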
TensorFlow has just released an official Object Detection API here, which can be used, for instance, with their various slim models. The API contains implementations of several object detection pipelines, including the popular Faster R-CNN, along with pre-trained models.
The pycaffe tests and this file are the main gateway to the Python coding interface. First of all, you would like to choose whether to use Caffe with the CPU or the GPU. It is sufficient to call caffe.set_mode_cpu() or caffe.set_mode_gpu(), respectively.
Net
The main class that the pycaffe interface exposes is the Net. It has two … Read more
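A minimal sketch of that first step, with placeholder file names: pick the mode, construct a Net, and run a forward pass.

```python
import caffe

caffe.set_mode_gpu()      # or caffe.set_mode_cpu()
caffe.set_device(0)       # GPU id to use when running in GPU mode

# Hypothetical file names; the phase can be caffe.TEST or caffe.TRAIN.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

out = net.forward()       # push the data currently in the input blobs through the net
print(list(out.keys()))   # names of the output blobs
```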