Recently I found it necessary to join two git repos together while still maintaining history & the future ability to split or rejoin them. Enter the ‘git subtree’ command.
Because this command has been merged into git-core since 1.7.11, we will need to install the latest git from a PPA. Note that as of ‘now’, the latest available package from the PPA we will use is 1.8.0, and it currently installs the subtree script to /usr/share/doc/git/contrib/subtree. However, since the Makefile there expects asciidoc.conf to be in ../../Documentation/asciidoc.conf, we must check out the source package & make from there.
I am using Ubuntu 12.04.1 LTS here.
Installing latest git-core + git-subtree
First add the PPA, update & upgrade. Then install git packages that are held back by apt. Also install asciidoc (optional if you want the manpage).
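The commands look something like this. The PPA name is an assumption on my part (ppa:git-core/ppa is the usual one), so substitute whichever archive you are actually using:
sudo add-apt-repository ppa:git-core/ppa   # assumed PPA name
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install git git-man           # the packages apt was holding back
sudo apt-get install asciidoc              # optional, for building the manpage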
Next, install the source package & make git-subtree + manpage:
[ ! -e ~/src/git-core ] && mkdir -p ~/src/git-core
cd ~/src/git-core && apt-get source git-core
cd ~/src/git-core/git-*/contrib/subtree/
[ -e '/usr/lib/git-core' ] && sed -i -e '/^libexecdir.*/ s|/libexec/|/lib/|' Makefile || echo '/usr/lib/git-core does not exist! Check that your libexec dir exists and reinstall git-subtree'
sudo make prefix=/usr && sudo make prefix=/usr install && sudo make prefix=/usr install-doc
This may not work for you if you’re not using Ubuntu (your prefix or libexec dir may be different). If in doubt, get git-core from upstream and build it from there, or install the script to $(git --exec-path)/git-subtree and chmod 755 it (see Makefile & INSTALL doc in contrib/subtree).
Now you should be able to use the ‘git subtree’ command. For help, run ‘git help subtree’ or ‘man git-subtree’.
Some helpful examples of git subtree in use in the wild:
Update 2012-10-22: Currently the package for git 1.8.0 from the PPA does not include the git-prompt.sh script. If you are using the __git_ps1 function in your bash prompt, you’ll need to get the source package and find that file in /home/jcuzella/src/git-core/git-1.8.0/contrib/completion/git-prompt.sh. Install that file somewhere under your home directory and then add a line to your .bashrc file to source it. You’ll know if you need it because you’ll probably see this message after installing latest git:
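The missing-prompt symptom is typically bash complaining that __git_ps1 is not found (I’m inferring the exact wording). Wiring the script up looks something like this, with the destination path being just an example:
cp ~/src/git-core/git-1.8.0/contrib/completion/git-prompt.sh ~/.git-prompt.sh
# then add this line to your ~/.bashrc:
source ~/.git-prompt.sh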
Recently I found myself with shell access on a host without a git client installed, and also without the necessary build tools to compile it (gcc, make, etc…). However, I did have access to a machine with the same processor architecture (in fact, the same exact processor). If you manage to find yourself in this situation, what you need to do is compile git statically on the machine which does have gcc and make installed.
What’s static mean?
In the compilation process, the compiler usually must link in shared dynamic libraries. In Windows, these are called .dll files, and in linux they are usually .so files. These libraries are simply collections of compiled functions that can be reused for many different programs that require them to do a specific task. By sharing these libraries, the computer can save RAM and hard drive space by only requiring one copy of a specific library to be present for many programs that have been compiled for it.
In order to avoid unexpected behavior, a program must sometimes be compiled with a specific version of a dynamic library in mind. This isn’t always true, but in order to ensure portability and expected behavior it’s important. In linux, your package manager takes care of making sure these version dependencies are satisfied correctly. However, this can be a problem when you’re stuck on a machine you have no control over. You can’t know for sure what version of a specific library is installed, or when it will be upgraded. You could build your program on another machine with the same processor architecture and the same libraries, and then just copy it over, but that leaves room for breakage down the line in case your target machine’s libraries are upgraded, or if any of the libraries on the target machine are compromised or replaced by malicious versions. Here’s where statically building comes in handy.
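To make this concrete, ldd is the tool that shows which shared libraries a dynamically linked binary depends on; this is exactly the list that has to match up between machines:
# list the shared libraries a binary is linked against
# (for a statically linked binary, ldd just reports "not a dynamic executable")
$ ldd $(which git)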
How to build git with static linking
This example assumes you already have access to a machine with build tools already installed. This build machine is also assumed to have the same processor architecture as your target machine. You can find the latest stable release of git at: http://git-scm.com
Here are the steps to take:
1) On your build machine, get the source code for git, unpack it, and go into the source directory:
$ wget http://kernel.org/pub/software/scm/git/git-1.7.2.2.tar.bz2
$ tar -jxvf git-1.7.2.2.tar.bz2
$ cd git-1.7.2.2
2) Configure git to install to a predetermined directory, with static linking. (Replace /home/myuser/git-static with whatever path you want):
$ ./configure --prefix=/home/myuser/git-static CFLAGS="${CFLAGS} -static"
3) Build it:
$ make
# Optional: make man pages and documentation
# Assumes you have asciidoc and other required programs on your build machine
$ make doc
# Install to your target directory
$ make install
4) Assuming all went well, now you can pack it up into a tarball for transfer to your target machine.
$ cd /home/myuser/git-static/
$ tar -cjvf git-static.tar.bz2 ./*
5) Copy it over to your target machine however you can, and unpack it to your home directory there with tar:
$ cd ~
$ tar -jxvf git-static.tar.bz2
# Check that you can use git.
# If you can't, make sure that your ~/bin directory is in your environment's $PATH
$ git --version
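As a final sanity check (on either machine), file(1) can confirm the binary really is self-contained; the ~/bin path assumes you unpacked the tarball to your home directory as in step 5:
$ file ~/bin/git
# should report "statically linked" rather than "dynamically linked"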
I originally wrote this last year on February 12, 2013. A lot has changed since then, including getting a new job. Yet I am still coming back to the same themes and concepts in my new environment. I feel like I’ve gained enough experience now to post the final draft on this topic, and on the existential dilemmas & problems that we face as Software or DevOps Engineers. Even if you’re an Engineer of another kind, you’ll probably relate to these high-level concepts and philosophies, because they really have to do with solving complex problems of any kind in an impermanent world of change.
One day, a co-worker asked me this question:
Q: In the general case: is it better to simply fix the immediate problem, even if I foresee a possible future problem? And what do I do if solving the possible future problem creates more problems? I feel like this question is practically philosophical…
It took me a while to think & write up an answer for this, but afterwards I realized that I had some helpful tips to share.
A: Ah, yes… I am currently also trying to figure out the ideal solution to this philosophical problem.
Getting Lost in Problem-Space
Ideally when we encounter a problem, we would want to solve it without creating new ones. However, in practice there are many cases where solving one problem either creates or reveals a new one (with some sort of cause/effect relationship). Usually we can happily go along our way fixing each problem as it arises, uncover a new one, and start fixing that. Hopefully this process leads us towards fewer problems and some feeling of completion of the task at hand, where we can mark it as done. However, sometimes solving one problem causes a cascade of new problems to arise. Sometimes, too, we get lost in the maze of problems and lose sight of the forest for the trees.
Increase Your Awareness of the Forest
In this case, I have realized that without a full awareness of enough cause/effect possibilities that stem from our decision, it is easy to get lost in the labyrinthine field of “problem-space”. This can be very scary, discouraging & overwhelming, and give us many mixed feelings and beliefs about which way to go. Sometimes this awareness of a potentially infinite minefield of problems can instill the ‘fear of the unknown’, which drives me towards the decision to do nothing, ignore, or procrastinate, yet the initial problem still is not solved. One helpful thing I’ve realized is that we all need rest sometimes, and a break can help us recharge & come back at the problem with full force; it’s important, though, to do this consciously rather than self-sabotage or allow it to become procrastination. Once we realize the futility of doing nothing, rather than “hiding” and withdrawing from engaging in life for too long, I usually end up deciding to do something with as much information as I know in the moment. Sometimes I end up researching more to find some possible solutions or workarounds and choose the best one, sometimes I break the problem down into pieces and make a small step towards solving the first problem in the moment, and sometimes I ask for help or try to find some expert in the field who can either point me in the right direction or even fix the problem for me. Other times, I’m tempted to implement a temporary or stop-gap solution. Still other times, I forget my own advice altogether and find myself stuck in a frustrated mood. In these times, it helps to have a friend or coworker bring my awareness to this state, which signals me to take a break, or to pursue another avenue to get unstuck.
Ideal Solutions vs. Quick Hacks & Band-aids
With solutions, generally there is some sort of feeling of confidence as to whether the solution is a quick hack, or a more complete one. It’s good to trust one’s intuition in this case, but sometimes time prevents us from completing the ideal solution too. Sometimes I end up searching and debugging down so many levels, running into so many problems and dead ends and poring over so much information that it becomes overwhelming. (Some have come to refer to this as yak shaving). Over time, we would hope that we could somehow avoid dead ends and quick hacks, and there is some truth to the saying “a stitch in time saves nine”. I tend to prefer seeking the more ideal or elegant solutions in general, however, sometimes we all need some quick hack to get things working. In practice, sometimes we find that due to a deadline, some obstacle, or some other reason, we must implement some sort of quick workaround with hopes to come back and fix it later. The danger in this is that we haven’t solved the real problem, and eventually it can come back around and bite us again (and when enough users run into it, or enough people in the community are blocked by this issue, then a good general solution is important for all!). So again we have the spectrum of beliefs that could either lead us towards a quick hack or towards a more dependable and future-proof solution (although potentially more time consuming). In each case, I’ve come to realize that in the end, it all does come down to a quick decision in the moment of which path to take. We can realize that the journey of getting there was all part of the solution, but staying in the moment in each moment feels important to me too. In realizing the power of Now, we can regain our balance and choose our path. The context of your present moment contains all the information you need to make the right decision.
Problem Solving Balance
It’s good to stop here and realize that there is a limit to the amount of knowledge and information searching that the human mind & ego can handle before getting tired and simply wanting to settle with any solution, whether or not it’s the best one. Taking a step away from the problem, “rubber ducking” (ideally with a real person… “getting fresh eyes on the situation”), code reviews, or asking for help can provide what’s needed in the moment to become unstuck. Hopefully these tools will lead us towards a cascade of solutions and the happy sense of completion I spoke of above. It’s always a balance, and I’m always learning too.
Some feel that it’s best to avoid getting carried away in a “yak shaving” party, and then use this belief to justify avoiding all potential rabbit holes. However, the trap of avoiding all “yak shaving” hurts us, because not all yaks are unnecessary problems to solve (see definition 1 here). If I encounter the same problem many times, it becomes a big enough issue that I feel the need to address it fully as its own problem. In this case, it helps to do a bit more research and map out the problem’s “environment” and the “problem space” a bit in order to find a quicker solution to the problem. Usually I can solve something and move on without encountering too many “dead-ends”. However, sometimes I also find myself in a labyrinth of dead ends, getting frustrated with myself. In this case, usually the problem is that I’ve made myself “snowblind” to the real cause of the problem in the first place. Again, the methods of breaking it down, stepping back, getting help, or fresh eyes on the issue can help to get unstuck. Finally, in the software world there is also another very promising technique to prevent us from going down too many dead ends while coding: Unit Testing!
Map & Record Your Problem-Space
The easiest way to solve a maze is to put your arm out and follow one wall. You might end up going down a bunch of dead ends, but usually you won’t hit all of them, and you will always find your way out. The easiest way to avoid dead ends is to learn from your past experience and remember which way to go. But what happens when the maze is too big for us to memorize, or we don’t remember the way out? Well, you’d probably want to create a map. In software, there is one good way to do this to avoid bug regressions: Automated Testing. The idea behind Test-Driven Development (TDD) is to first write a test that will verify your code performs the function it should, then write the actual functional code to get the test to pass. Over time, this idea allows us to create a library of Automated Unit Tests that verify our code works. It also protects us from old bugs recurring, or from introducing a certain set of new bugs, as long as our tests are well designed. This method can help improve and streamline our coding cycle by immediately letting us know whether we’ve gone down an old dead end we already knew about, or if we have created a new problem. Essentially we are painting ourselves into the ‘happy path’ which we will eventually converge upon.
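As a minimal sketch of the idea in Python (the slugify function and its test are made up purely for illustration), the test is written first and becomes part of the map: any later change that breaks the behavior hits a known wall immediately.
import unittest

def slugify(title):
    # the unit under test: a tiny made-up example function
    return title.strip().lower().replace(' ', '-')

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        # written before the implementation, this pins down the expected behavior
        self.assertEqual(slugify('  Happy Path  '), 'happy-path')

if __name__ == '__main__':
    unittest.main()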
The Philosophical & Existential Despair of Desire Alignment & Problem Solving
Because we are human, we do have desires. As engineers, we tend to be driven to solve problems & have a desire to do so. However, sometimes we don’t truly desire to solve all the problems we are faced with. Sometimes problems are too simple or boring to us, or sometimes we are faced with a multitude of small problems and issues which drag us away from our original problem. Sometimes we just feel so frustrated, not knowing “Why won’t things just stay fixed?”. The answer to this question is that this is a world of forms which are constantly shifting and changing. You are not the same person you were 5 years ago, last year, yesterday, or even a moment ago. Software is constantly updating and changing; the applications and operating systems we work on are being developed and improved. Due to the complex dependencies and interconnectedness of these pieces of software, sometimes things end up in a (very) broken state. It’s really a multidimensional shifting puzzle that is constantly evolving over time. Think of some kind of pandimensional hyper-Rubik’s cube of entangled dependencies. Sometimes problems are too hard due to the number of simple yet interrelated problems. Here’s where workarounds and simplifying things can really help. Sometimes we may decide to give up, or find another way around the problem. Perhaps we may just decide to cut through the Gordian knot, and avoid solving the difficult problem altogether. As engineers, and humans with an ego identity, we can tend to see these possibilities as unskillful, or perhaps undesirable. It may feel like giving up; however, there is great wisdom in this route. It’s a perfectly valid choice to simplify a problem to the point of neutralizing it altogether. The real hard part here is our own internal struggle with our desires. Alan Watts, as always, has some wisdom on this topic:
Sometimes I find myself so deep in thought about something that my mind feels stuck in overload of the infinite subtleties of navigating life… yet surprisingly enough I find my way through even the most difficult of circumstances… and often enough I am able to see my thoughts in their “external” parallel thought form expression, as if looking in the mirror. This highlights the imperfections in myself which I fondly look at and love for what they are, in their own time and place. Then I remember that the axiom “as above, so below” holds as well… and it gives me great comfort in knowing that everything is alright, and I’m meant to be doing whatever it is that I’m doing, whether I see it as perfect or not.
“When you make a decision… (people have a great deal of anxiety about making decisions)… So when we decide, we’re always worrying ‘Did I think this over long enough, did I take enough data into consideration?’ And if you think it through, you’ll find that you never could take enough data into consideration. The data for a decision in any given situation is infinite. So what you do is you go through the motions of thinking out what you will do about this, and when the time comes to act you make a snap judgement. But we fortunately forget the variables that interfere with this coming out right; it’s amazing how often it works. But worriers are people who think of all the variables beyond their control, and what might happen. So then when you make a decision, and it works out alright, I think very little of it has much to do with your conscious intent and control.” – Alan Watts
Conclusion
At the end of the day, we all usually try our best to come up with good solutions, no matter how difficult or daunting this may be. There is a balance to be found between seeking ideal best-case solutions and implementing quick and usually temporary kludges (keep in mind there are elegant hacks too!). Also, it’s important to note that perfection is an illusion, because perfection is highly subjective (things can always be improved, or be worse). It’s ok to settle for “good enough”, as long as you take an attitude towards continuous improvement. A positive attitude of accepting that mistakes can and will be made (and that’s ok!), combined with the habit of learning from mistakes, creates a direction of evolution towards continuous improvement, while a self-defeating attitude of overwhelm with things being imperfect can lead to an attitude of giving up. Sometimes it is very overwhelming to get lost in a labyrinth of problems. Therefore it can be quite helpful to learn some mental philosophical kung-fu and other techniques which we can use to regain our balance. This is possible without going too far into reviewing the entire spectrum of Agile Software Development philosophy, or too far into software-specific patterns or techniques. The most helpful techniques for Software newcomers are those that can help us feel less overwhelmed and help to re-frame the situation, such as: Research, Breaking the Problem Down, Getting Fresh Eyes, Asking for Help, Seeking an Expert, Mapping your Problem Space, Workarounds, Simplification, and Cutting the Gordian Knot. The most helpful technique of all is to realize the power of choice in every now moment, and mastery is knowing what technique each moment calls for. Over time, we learn lots of techniques and eventually become a master or expert in our field. Every teacher was once a student, and the best teachers are those that learn from their students. I’m always learning more and working on improving too 😉 Happy coding!
Recently, while trying to install the cr-gpg Google Chrome Extension (blog post to come), I ran into a small problem trying to import the .crx file. This led me to find out what the .crx file type is, and how to extract it. As most of the posts on this blog are for rather advanced linux users, I’m going to try and make this more general and helpful for the general public.
TL;DR
The short answer: unzip!
It’s a .zip file with an extra header containing the author’s public key and signature. You may want to strip the header if you’re repackaging it.
Did you find a CRX file in your web browser’s “Downloads” folder on your computer and wonder what program should open it? Maybe you’ve been trying to install an unofficial Google Chrome Extension and got the message “Apps, extensions, and user scripts cannot be added from this website” when opening the .crx file.
Answer
A file with the CRX file extension is a Google Chrome Browser Extension Archive file.
According to a quick google search: CRX files might also be “Links Games Course” files (Although it comes up as a top result in Google, I couldn’t find any other info on this… I’m unsure if these are even a valid file type, or just google search spam?).
How To Open a CRX File
As you probably know, the easiest way to open any file is to double-click it and let your PC decide which default application should open the file. If no program opens the CRX file then you probably don’t have an application (ie: Google Chrome) installed that can view and/or edit CRX files.
Warning: If you are on Windows, beware when opening executable file formats received via email or downloaded from websites which you are not familiar with. See this List of Executable File Extensions for file extensions to avoid and why. (If you are on linux, be smart, be secure, and be happy! ^_^)
The CRX file type is primarily associated with the ‘Google Chrome’ web browser by Google. Any file with extension CRX is likely a plugin file or, as it is more commonly known, a Google Chrome Extension file. These files are used to package a Google Chrome extension, and can be installed in Google Chrome to add extra features to the browser.
The Google Chrome browser uses CRX files to provide extensibility in the browser program. A Google Chrome CRX file is really just a renamed ZIP file with an extra bunch of bytes in the header to verify the plugin’s origin (the signature is created with the author’s private key and validated using the bundled public key). This is all done for security purposes, as we would not want to run or install any browser plugin from a source that we do not trust.
In theory, any archive/compression program, like 7-Zip, TUGZip, unzip, or MacZip (all free), or WinZip/WinRAR (non-free), will open CRX files for extraction (expansion/decompression). CRX files may also be opened using any other archive tools you may be familiar with to view the contents of the packaged plugin/extension. However, depending on whether your tool can ignore the file header correctly, you may need to strip it first to convert to a zip, or use another tool.
As of this writing, there is no way to open the CRX file in its default program (Google Chrome) and choose to save the open file as another file format. However, you may want to try 7-zip to extract it, modify it to your liking, and then repackage it as a .ZIP.
There is one basic way to attempt to convert a CRX file to a ZIP file: strip the extra header!
Important: You cannot usually change a file extension (like the CRX file extension) to one that your computer recognizes and expect the newly renamed file to be usable. An actual file format conversion using one of the methods described above must take place in most cases.
To do this job, we’re going to have to resort to some basic unix commands: dd and tail
To strip the header, you’ll need to know how long it is first. Luckily, InfoZIP’s unzip utility can tell us how long it is (tested on Ubuntu and CentOS with unzip versions 6.00 and 5.52 respectively):
# Get unzip if we don't have it
[ -z "$(which unzip)" -a -n "$(which apt-get)" ] && sudo apt-get -y install unzip
[ -z "$(which unzip)" -a -n "$(which yum)" ] && sudo yum -y install unzip
unzip -l file.crx
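On a CRX, that listing comes with a warning about “extra bytes at beginning or within zipfile”, and that byte count is the length of the header. Stripping it then looks something like this (the 306 is only an example; use whatever count unzip reported for your file):
# unzip -l prints a warning like: warning [file.crx]: 306 extra bytes at beginning or within zipfile
dd if=file.crx of=file.zip bs=1 skip=306
# or equivalently with tail (start output at byte N+1):
tail -c +307 file.crx > file.zip
unzip -l file.zip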
I decided to write a simple web spider in order to learn Python, and to generate a list of urls for webserver benchmarking & stress testing… and so Spyder was born. It is written in Python 3.
When called on a url, it will spider the pages and any links found up to the depth specified.
After it's done, it will print a list of resources that it found.
Currently, the resources it tries to find are:
images - any images found on the page (ie: <img src="THIS"/>)
styles - any external stylesheets found on the page. CSS included via '@import' is currently only supported if within a style tag!
(ie: <link rel="stylesheet" href="THIS"/> OR <style>@import url('THIS');</style> )
scripts - any external scripts found in the page (ie: <script src="THIS"> )
links - any urls found on the page. 'Fragments' are discarded. (ie: <a href="THIS#this-is-a-fragment"> )
emails - any email addresses found on the page (ie: <a href="mailto:THIS"> )
An example script for doing something like this, 'www-benchmark.py', is included. It uses apache benchmark as an example.
Eventually I'll be experimenting with 'siege' for benchmarking & server stress-testing.
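For reference, here is roughly how those tools get pointed at the generated url list (standard ab and siege flags; urls.txt is just an illustrative name):
# apache benchmark: 100 requests, 10 concurrent, against a single url
ab -n 100 -c 10 http://example.com/
# siege: 25 concurrent users for one minute, fed a whole file of urls
siege -c 25 -t 1M -f urls.txt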
NOTE: Currently the spider can throw exceptions in certain cases (mainly character encoding stuff, but there are probably other bugs too)
Getting *working* character encoding detection is a goal, and is sorta-working... ish? Help in this area would be appreciated!
Filtering the results by domain is almost working too
I finally got my home development server completely updated, including a freshly compiled Gentoo hardened kernel! Now that I’ve got my server setup and working smoothly again, I started looking into the IDE side of the equation so I could do PHP web development on my laptop.
So after looking around a bit, I stumbled upon the idea of using Eclipse to do PHP development. In the past I have disliked Eclipse due to its tendency to have problems with its workspace “.metadata” files over time, along with its slowdowns and/or freezing. However, after seeing a presentation about Mylyn I reconsidered. After looking up some other plugins, I was convinced that Eclipse is definitely worthy of a second look. What’s Mylyn, you ask? In a nutshell: Mylyn is a task-oriented plugin for Eclipse, giving you the benefit of saving which files & tabs you have open in Eclipse for a specific task. A task can be anything: a bug report in Bugzilla that you’re working on, or simply a powerpoint presentation (an example given in the presentation with Tasktop Pro, the fully featured task-oriented desktop app from Tasktop Technologies).
Why am I reconsidering Eclipse? Well for starters:
It’s built on Java, so I won’t be tied to using Windows for my laptop forever (Eventually I’m looking into getting a Mac)
Mylyn allows integration with Bugzilla, along with a solution to my constant “too many tasks with too many tabs” problem.
It includes built-in task scheduling features, perfect to start training myself to do better time management.
Allows for developers to share “contexts” for each task (or bug) with one another, allowing for easy views on what parts of the code a bug/feature affects. Collaboration is made that much easier!
The PHP Development Tools (PDT) project gives PHP code completion, PHP debugging (once you install an apache server library), and all the other nice standard features of Eclipse. For the Apache module, you’ve got the choice of either the free & open source XDebug or the binary blob Zend Debugger.
The Subclipse plugin (Modern Git Repo) allows for nice integration with SVN (although I prefer git, I am forced to use SVN for a couple of projects). I was also familiar with using this plugin from my college’s Software Development class, where we used Eclipse & SVN to do Agile Java programming with many different teams over the course.
The Ajax Tools Framework (ATF) gives many of the features that the FireBug plugin for Firefox supports including: DOM Inspector, JavaScript Debugging, live CSS style editor, and all that good stuff. It does this by embedding Mozilla into Eclipse!
I’m really excited to start debugging PHP code on the server. Previously I’d been using jEdit, an SSH terminal, and Firefox to develop. This upgrade should improve my productivity a lot.
On Friday I had a quite eventful day involving a bunch of lucky and happy coincidences, along with an amazing spurt of ultra-productivity! Although it was an interesting day, that’s not what this post is about.
At one point, I was working on creating a CUE sheet for episode 004 and realized that GoldWave was clobbering all the PERFORMER attributes for every track in the original CUE sheet I imported! That was definitely no good, and really irritated me at the time.
My current workflow for CUE-ing a mix is as follows:
Export tracks from Traktor to a directory (ie: “~/Music/LyraPhase/004”), then make sure tracks are in order & named in the format: 01 – Artist – Trackname.mp3
Make a tracklist text file:
~/Music/LyraPhase/004$ ls -l --color=never > LyraPhase_004.txt
Run my magical script to generate a CUE file with empty INDEX points:
~/Music/LyraPhase/004$ tracklist2cue.pl LyraPhase_004.txt
NOTE: all tracks have initial cutpoints of 00:00:00
Opening tracklist file: LyraPhase_004.txt
Writing cue file to: ./LyraPhase_004.cue
Import the blank CUE file into GoldWave, listen, do audio processing stuff, then edit the track INDEX points.
Save the wav & CUE files.
Find out some extra stuff is gone after GoldWave got through with it 🙁
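For anyone unfamiliar with the format, here is roughly what one track’s entry in a CUE sheet looks like (artist/title are made up; the PERFORMER lines are exactly what GoldWave was throwing away):
FILE "LyraPhase_004.wav" WAVE
  TRACK 01 AUDIO
    TITLE "Some Trackname"
    PERFORMER "Some Artist"
    INDEX 01 00:00:00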
Enter perl:
So since I really like GoldWave otherwise, I decided to go dust off my monk robes & dive into perl again. The initial goal was to be able to read in the 2 CUE sheets, copy INDEX points from one to the other, and then save it again. I also have been thinking about other things in the future I may want to do with CUE sheets, so I decided to try to find some perl code to do what I wanted.
After a search, I found a module on CPAN called Audio::Cuefile::Parser which really didn’t do everything I wanted, or fully support the entire CUE file specification as per the documentation here.
After 1.5 days worth of hacking at it, I’ve successfully got 1/2 of the problem solved. So far my Audio::Cuefile::ParserPlus module will happily read in CUE sheets and print out the track information for you. The next step is to make a file output method, which should be simple now that the hard part of parsing in things via regex is finished ^_^
Current code snapshot can be found at my GitHub Repository
Happy Hacking ^_^
Here’s the first episode of the radio show! I finished (re-re-)recording it finally, and even took some time to make a cue sheet for it ^_^
I wanted to release it also in FLAC format on a separate podcast feed, however git’s current bugginess is preventing me from doing so at the moment. (see: http://www.josefsipek.net/blahg/?p=219 )
Tracklist:
01) Tarrentella/Redanka - Killerloop 'Organism'
02) Pig & Dan - Eiffel Nights (Original Mix)
03) Sultan & Ned Shepard Feat Stereomovers - Connected (Dub)
04) Stu Hirst - Big Rooms Bigger Tunes (Original)
05) Simon & Shaker - Zero (Original Mix)
06) Spider and Legaz - Psych
07) Twotrups - The Cello Track (Dub One Mix)
08) Sebastian Drums Tom Geiss and Eric G - Funky Beep (Original Mix)
09) Eelke Kleijn - Luigi's Magic Mushroom
10) Da'Other - Viva La Vida (Unplugged Mix)
11) Acquaviva & Maddox - Feedback (Valentino Kanzyani Earresistable Mix)
12) Eelke Kleijn - 8 Bit Era (Dub)
Thanks to the local dev server setup I have, along with svn and git, I’ve successfully and painlessly updated to wordpress 2.8.5. Pushing changes to the wordpress_base branch on my site is quite simple, as I don’t really plan on modifying the core wordpress code that much. Any changes to the code made by an svn update will only change files that I probably haven’t ever touched, so merging branches should be painless. Plus, the core wordpress code is tracked by svn, while both the core code and my changes are tracked by git. That way, I’ve got my own local branches that incorporate any updates made by svn, plus everything else.
In case you’re really interested and wondering how this is all done, see the following links:
Basically all you have to do is follow the wordpress update instructions from the 2nd link, but replace the svn switch command with the one found at the 1st link. The way to update in git is pretty smart: all core wordpress code changes are tracked in the main wordpress_base branch and updated via svn; the changes then go into a new integration branch, and finally the master branch is rebased onto that one so that the merged changes end up on master.
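In shell terms, the branch dance looks roughly like this (a sketch based on the description above; the svn tag URL, branch name, and commit message are my own guesses):
# update the pristine wordpress branch from upstream svn
git checkout wordpress_base
svn switch http://core.svn.wordpress.org/tags/2.8.5/ .
# capture the update on a fresh integration branch
git checkout -b integrate-2.8.5
git add -A && git commit -m 'wordpress 2.8.5'
# rebase my customizations on top of the new core
git checkout master
git rebase integrate-2.8.5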
Originally it seemed stupid to use svn to track the remote wordpress repo; however, I tried using git’s svn capabilities and found that the only supported way to switch svn tags within git broke everything, so it’s actually better and more painless to use both version control systems.
So I’ve finally managed to figure out the intricacies of tracking wordpress via SVN, while locally using git to track my entire site, plus any other changes made to wordpress locally. So everything should be working at least until the next wordpress upgrade, at which point I’ll hope my current upgrade process still works.
I should have some mixes from the show uploaded soon. I’m also hoping to switch to the carrington theme, but for now at least it’s working.