
Snapshot Tests With Jest

Writing tests can sometimes be a tedious task. Mocks and assertions can be a pain in the ass. The latter is especially nasty when HTML is involved. Give me the second p element from the 30th div within an article inside an aside etc. – no thanks.

The creators of Jest (Facebook) have found a better way: Snapshot tests!

How does it work?

Take a look at the following assertion:

it('should create a foo bar object', () => {
  const result = foo.bar()
  expect(result).toMatchSnapshot()
})

toMatchSnapshot() takes whatever you give to expect(), serializes it and saves it to a file. The next test run will compare the expected value to the stored snapshot and will fail if they don’t match. Jest shows a nicely formatted error message and a diff view on failed tests.
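Assuming foo.bar() returns { foo: 'bar' }, the snapshot file Jest writes to the __snapshots__ directory would look roughly like this:

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`should create a foo bar object 1`] = `
Object {
  "foo": "bar",
}
`;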

This is really useful with generated HTML and/or testing UI behaviour. Just call the method and let it compare to the snapshot.

Updating snapshots

You added something to your code and the snapshot has to be updated? No problem:

jest --updateSnapshot

If you’re using the Jest watcher it’s even simpler. Just press u to update all snapshots or press i to update the snapshots interactively.

What about objects with generated values?

Here’s an example with a randomized id:

it('should fail every time', () => {
  const ship = {
    id: Math.floor(Math.random() * 20),
    name: 'USS Defiant'
  }

  expect(ship).toMatchSnapshot()
})

The id will change on every test run, so this test will fail every time. Well, shit? Nope, Jest has you covered:

it('should create a ship', () => {
  const ship = {
    id: Math.floor(Math.random() * 20),
    name: 'USS Defiant'
  }

  expect(ship).toMatchSnapshot({
    id: expect.any(Number)
  })
})

Jest will now only compare the type of the id and the test will pass.

For certain objects like a date, there is another possibility:

Date.now = jest.fn(() => 1528902424828)

A call to Date.now() will hit the mock and always return the same value.
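For example, a test like this (the log entry object is made up) now produces a stable snapshot:

it('should create a log entry', () => {
  Date.now = jest.fn(() => 1528902424828)

  const entry = {
    message: 'warp core breach',
    createdAt: Date.now() // always returns the mocked timestamp
  }

  expect(entry).toMatchSnapshot()
})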

Some advice

  1. Always commit your snapshots! If they are missing, CI systems will always create new snapshots and the tests will become useless.

  2. Snapshot tests are an awesome tool, but don’t be too lazy. They are no replacement for other assertion types, especially if you’re working test-driven. Rather use them alongside with your other tests.

  3. Write meaningful test names. Well, you heard that one before, didn’t you? Really, it helps a lot when tests fail or you have to look inside a snapshot file. Jest uses the test name as an id inside the snapshot file. That’s why you have to update a snapshot after renaming a test.

Introducing DeliveryGuy

I like the Fetch API. It’s supported by all modern browsers, easy to use and has some really good polyfills for older devices. But Fetch has one major flaw: it will only throw errors if there is a network problem.

That’s a really odd decision from my point of view. Nearly every HTTP library out there throws errors or rejects the promise in case of an HTTP error.

A Fetch response object has the property ok to determine if the server responded with an error, but that’s not very comfortable to use.

Since my team and I have decided to use Fetch in a Vue-based web app, I decided to create a little wrapper for much more convenience. Say hello to DeliveryGuy.

Usage

Well, surprise, it’s a Node module, so just use any package manager you like. My personal choice is yarn.

yarn add delivery-guy

Example

import { deliverJson } from 'delivery-guy'

const getItems = async () => {
  try {
    return await deliverJson('/api/items')
  } catch (e) {
    console.error(e.message)
    console.log('HTTP Status', e.response.status)
    console.log('Response Body', e.responseBody)
  }
}

const items = getItems()

What’s going on here?

DeliveryGuy exports two main functions:

  • deliver() will return a response promise like fetch() does.
  • deliverJson() presumes your response body contains JSON. It’s basically a shortcut and returns the promise of Response.json().

Both will accept the same two parameters as fetch() does and pass them along.
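For example, a POST request could look like this (endpoint, payload and function name are made up):

import { deliver } from 'delivery-guy'

const createItem = async (item) => {
  // deliver() takes the same (input, init) parameters as fetch()
  const response = await deliver('/api/items', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(item)
  })

  return response.json()
}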

If the server responds with an HTTP error, DeliveryGuy will throw an error.

Due to the inheritance limitations of built-in classes with ES5 I mentioned in my last post, it’s only possible to set custom properties on a custom error class.

DeliveryGuy provides two additional properties on the error object:

  • response has the original response object of a Fetch call.
  • responseBody contains the response body and will try to parse it as JSON. If JSON.parse fails, it will return the response body in its original state.
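Conceptually, the responseBody behaviour works like this (a minimal sketch with a made-up helper name, not the actual module code):

const parseBody = async (response) => {
  const body = await response.text()

  try {
    // try to parse the body as JSON ...
    return JSON.parse(body)
  } catch (e) {
    // ... and return it as-is if parsing fails
    return body
  }
}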

TL;DR

DeliveryGuy allows you to call the Fetch API comfortably, without any hassle on HTTP errors. Just use try/catch and you’re done.

Please let me know on GitHub if you have feedback, a feature request or found a bug. Thank you!

Why custom errors in JavaScript with Babel are broken

Have you ever tried to write a custom error class in JavaScript? Well, it does work to a certain extent. But if you want to add custom methods or use instanceof to determine the error type, it will not work properly.

Here is a little example of a custom error class:

class MyError extends Error {
  constructor(foo = 'bar', ...params) {
    super(...params)
    
    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, MyError)
    }
    
    this.foo = foo
  }
  
  getFoo() {
    return this.foo
  }
}

try {
  throw new MyError('myBar')
} catch(e) {
  console.log(e instanceof MyError) // -> false
  console.log(e.getFoo()) // -> Uncaught TypeError: e.getFoo is not a function
}

Works fine in any browser with ES6/ES2015 support, but if you transpile the example with Babel to ES5 and execute the code, you will get the results shown in the comments.

Why

Due to limitations of ES5 it’s not possible to inherit from built-in classes like Error, see the Babel docs.

Possible solution

The docs mention a plug-in called babel-plugin-transform-builtin-extend to resolve this issue, but if you have to support older browsers it may not help. In order to work, the plug-in needs support for __proto__. Take a guess which browser does not support __proto__ … and of course, it’s the web developer’s best friend aka Internet Explorer. Thankfully it affects only version 10 and below.
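If the plug-in is an option for your target browsers, the .babelrc set-up would look something like this (based on the plug-in’s README, double-check the exact options there):

{
  "plugins": [
    ["babel-plugin-transform-builtin-extend", {
      "globals": ["Error", "Array"]
    }]
  ]
}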

Workaround

If it’s not feasible to use the plug-in, you can at least access properties set in the constructor. A call to e.foo in the example is possible, but e instanceof MyError will return false, since you will always get an instance of Error.
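In terms of the example above, that means:

try {
  throw new MyError('myBar')
} catch (e) {
  console.log(e instanceof Error)   // -> true, it's a plain Error instance
  console.log(e instanceof MyError) // -> false after transpiling to ES5
  console.log(e.foo)                // -> 'myBar', constructor properties survive
}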

Conclusion

None of this is ideal. We have to wait until it’s possible to use ES6/ES2015 directly. Yes, we could all set our transpile targets to ES6/ES2015 today, but our clients usually won’t allow it. Some customer is always browsing the web with an ancient device/browser.

Awesome tests with Vue and Jest

Jest is a very neat JavaScript testing library from Facebook. It’s mostly syntax-compatible with Jasmine and needs little to no configuration. Code coverage reports are there out-of-the-box, and with sandboxed tests and snapshot testing it has some unique features.

Set-up Jest

Vue CLI

Are you using Vue CLI? Consider yourself lucky, the set-up of Jest could not be simpler:

yarn add --dev jest @vue/cli-plugin-unit-jest
vue invoke unit-jest

Vue CLI will do the rest and also create an example spec for the HelloWorld component.

DIY

Install all necessary dependencies:

yarn add --dev @vue/test-utils babel-jest jest jest-serializer-vue vue-jest

Create jest.config.js in your project root directory:

module.exports = {
  moduleFileExtensions: ['js', 'json', 'vue'],
  transform: {
    '^.+\\.vue$': 'vue-jest',
    '^.+\\.jsx?$': 'babel-jest'
  },
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1'
  },
  snapshotSerializers: ['jest-serializer-vue'],
  testMatch: ['<rootDir>/src/tests/**/*.spec.js']
}

Please adjust the paths in moduleNameMapper and testMatch to your project.

You should also add a modern JavaScript preset to your .babelrc file:

{
  "presets": ["es2015"]
}

Optional step

Add the following line to your .gitignore file:

/coverage

The set-up is now complete. Let’s write a test file!

Writing tests

Here is a little example Vue component:

<template>
  <ul class="demo">
    <li class="demo__item" v-for="value in values" :key="value">{{ value }}</li>
  </ul>
</template>

<script>
export default {
  name: 'demo',
  data () {
    return {
      values: []
    }
  },
  created() {
    this.fetchValues()
  },
  methods: {
    async fetchValues() {
      const response = await fetch('/api/demo/values')
      this.values = await response.json()
    }
  }
}
</script>

The component Demo will fetch some values after its creation and display them in an unordered list. This example is quite simple, but testing is a little more complex due to the usage of fetch, async and await.

Of course, there are some tools to help us:

yarn add --dev fetch-mock flush-promises

Now, let’s write a test:

import { shallow } from '@vue/test-utils'
import fetchMock from 'fetch-mock'
import flushPromises from 'flush-promises'
import Demo from '@/components/Demo.vue'

const values = [
  'foo',
  'bar'
]

describe('Demo.vue', () => {
  beforeEach(() => {
    fetchMock.get('/api/demo/values', values)
  })

  it('renders component', async () => {
    const wrapper = shallow(Demo)
    await flushPromises()

    expect(wrapper.vm.values).toEqual(values)
  })

  afterEach(() => {
    fetchMock.restore()
  })
})

What’s going on?

  1. shallow from Vue Test Utils creates a wrapper of the rendered and mounted component; any child components will be stubbed. If you need child components in your test, please use mount instead of shallow.

  2. fetchMock will create a mocked version of the Fetch API. In this case it will return the defined values for a GET request to /api/demo/values. If you send a request that’s not defined in fetchMock, it will throw an exception and break your tests.

  3. The test itself is defined as async to use await for flushPromises(). It will wait until the mocked request is finished and the values are stored in the component’s data.

  4. You can now access the data property values and compare the content to the response of the mocked HTTP request.
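If you also want to cover the rendered markup, the snapshot testing from above fits right in. A hypothetical addition to the spec:

it('renders the list of values', async () => {
  const wrapper = shallow(Demo)
  await flushPromises()

  // jest-serializer-vue keeps the stored markup readable
  expect(wrapper.html()).toMatchSnapshot()
})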

Conclusion

Setting up Jest for Vue is easy, even if you have to do it manually.

A little warning: setting up Jest for an existing app can be tedious. The current AngularJS app of our customer can’t be tested with Jest, at least for now. The AngularJS HTTP mock does not work and I haven’t figured out the problem yet.

But enough of Angular: the real deal comes with the testing itself. Async/await is a nice and simple way to test asynchronous behaviour. I don’t think this could be easier and it’s a reliable method with the power of modern JavaScript. Try to imagine what the demo test would look like in ES5 …

Vue Loader Setup in Webpack

Since using Vue CLI is not an option in my current project, I had to add vue-loader manually to the Webpack config. Well, I was positively surprised by how simple it was, especially compared to a manual set-up of Angular 2 shortly after its launch in late 2016, when Angular CLI was an alpha version and buggy as hell.

Shall we begin?

Modules

Let’s add the necessary modules to the package:

yarn add --dev vue-loader vue-template-compiler
yarn add vue

If you want to use a template engine like Pug or prefer TypeScript over JavaScript, you can add the respective Webpack loader package, pug-loader for example. Webpack will also tell you in detail if modules are missing.

Webpack

Just add the following rule to your Webpack config:

{
  test: /\.vue$/,
  use: [
    {
      loader: 'vue-loader'
    }
  ]
}

Webpack will now be able to use import statements with Vue single file components.
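A minimal entry point could then look like this (the component name and mount point are assumptions):

import Vue from 'vue'
import App from './components/App.vue'

new Vue({
  render: h => h(App)
}).$mount('#app')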

If you want to have a separate JavaScript file with your Vue application, you can add a new entry point:

{
  entry: {
    'current-application': [
      path.resolve(__dirname, 'js/current-application.js')
    ],
    'vue-application': [
      path.resolve(__dirname, 'js/vue-application.js')
    ]
  }
}

Not fancy enough? How about vendor chunks?

{
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          chunks: 'initial',
          test: /node_modules\/(?!(vue|vue-resource)\/).*/,
          name: 'vendor',
          enforce: true
        },
        'vue-vendor': {
          chunks: 'initial',
          test: /node_modules\/(vue|vue-resource)\/.*/,
          name: 'vue-vendor',
          enforce: true
        }
      }
    }
  }
}

This will separate your vendor files from node_modules into vendor.js and vue-vendor.js. The property test contains a regex to determine which modules should go into the vendor chunks.

Of course, this doesn’t come even close to what Vue CLI can do. I highly recommend using Vue CLI when it’s feasible. It’s quite easy to configure while not being dumbed down, has excellent documentation and is very well maintained. At least for our customer’s set-up it would be difficult to use Vue CLI at the moment, but we are eager to migrate to Vue CLI asap.

Pimp Your Visual Studio Code

Like VS Code? Yeah, me too. Here are some very useful extensions and optical enhancements to get an even better experience with VS Code.

Extensions

Settings Sync

I love my Mac, but sometimes I have to do stuff (besides gaming) on my Windows PC as well. So, if I want to use the same config on both devices, I have to install my extensions and copy my config back and forth. That sucks. Settings Sync to the rescue! It syncs your settings, extensions etc. through the power of Gists on GitHub.


Bookmarks

It’s quite simple: just bookmark certain lines of code you need often.


EditorConfig for VS Code

Share your coding styles between editors, IDEs etc. on a per project basis. It’s really nice for teams as well.


Bracket Pair Colorizer

Bracket madness? No more!


Markdown All in One

Do you write a lot of Markdown? Here are some really useful helpers.


Themes/Icons

Cobalt2

If you like darker and bluish color themes, give Cobalt2 a try. The color choices with a contrast between yellow and blue can reduce strain on your eyes.


Material Icon Theme

Matches nicely with Cobalt2 or other darker themes. There are a ton of icons for almost any file type and even for many special folders like node_modules etc.


But that’s not the end. VS Code can do much more, you can find some other very useful tips and tricks on vscodecandothat.com.

Dear GitHub

Look, I want to like Atom, but there are too many caveats. Atom is slow and resource hungry as hell, which is especially bad on a mobile device. It literally eats my MacBook’s battery away. VS Code is not perfect either, but the general experience is much better. So, please, get your shit together and improve Atom. We need more awesome editors on the market!

Let's Encrypt Wildcard Certificates with acme.sh and CloudFlare

A few weeks ago Let’s Encrypt finally launched ACME 2.0 with support of wildcard certificates. Woohoo!

Wait, what are wildcard certificates?

Wildcard certificates allow you to use multiple hostnames of your domain with one certificate. Without them you need a separate certificate for each host like foo.webcodr.io and bar.webcodr.io. A wildcard certificate can be issued for *.webcodr.io and that’s it. One certificate to rule them all.

Get started

My nginx example used certbot to issue certificates from Let’s Encrypt, but there’s a better tool: acme.sh

Acme.sh is written in Shell and can run on any unix-like OS. Since it’s also installed with a Shell script, there’s no need for a maintained package to get the latest features. Just run:

curl https://get.acme.sh | sh

That’s it. The install script will copy acme.sh to your home directory, create an alias for terminal use and create a cron job to automatically renew certificates.

DNS challenge

To issue a wildcard certificate, ACME 2.0 only allows DNS-based challenges to verify your domain ownership. You can manage this manually, but challenge tokens only work for 60 days, so you would have to renew them every time a certificate expires.
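Doing it manually means creating a TXT record like this for every renewal (the value is a placeholder for the actual challenge token):

_acme-challenge.webcodr.io. 300 IN TXT "<challenge token from Let's Encrypt>"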

Well, that sucks. But acme.sh has you covered. It supports the APIs of many DNS providers like CloudFlare, GoDaddy etc.

The following guide will show you how to use the CloudFlare API to automatically update the DNS challenge token. No CloudFlare? No problem, you can find examples for all supported DNS providers within the acme.sh docs.

Set-up CloudFlare

Log in to CloudFlare and go to your profile. You’ll need the global API key.

Set your CloudFlare API key and your account email address as environment variables:

export CF_Key="sdfsdfsdfljlbjkljlkjsdfoiwje"
export CF_Email="you@example.com"

I recommend putting these environment variables into your .bashrc, .zshrc or the respective file of your favorite shell.

Issue a wildcard certificate

acme.sh --issue --dns dns_cf -d "*.webcodr.io"

Your new certificate will be ready soon and acme.sh will automatically renew it every 60 days. Just update your web-server configuration to the new path. I also recommend creating a cron job that reloads the web server every night to pick up a renewed certificate.
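Such a nightly cron job could look like this (assuming nginx runs under systemd; adjust the command to your set-up):

0 4 * * * systemctl reload nginx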

Unclutter your nginx config

If you manage multiple hosts within the same nginx instance, you can use include to put your TLS configuration in a separate file to avoid duplicates.

Create a separate file for your TLS configuration

File: /etc/nginx/tls-webcodr.io

ssl_certificate /home/webcodr/.acme.sh/*.webcodr.io/*.webcodr.io.cer;
ssl_certificate_key /home/webcodr/.acme.sh/*.webcodr.io/*.webcodr.io.key;
ssl_trusted_certificate /home/webcodr/.acme.sh/*.webcodr.io/ca.cer;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
ssl_ecdh_curve secp384r1;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security max-age=15768000;
ssl_stapling on;
ssl_stapling_verify on;

Update your site configuration

File: /etc/nginx/sites-enabled/webcodr.io

server {
  listen 80;
  server_name webcodr.io;

  location / {
    proxy_pass http://10.0.0.2:80;
  }
}

server {
  listen 443 ssl http2;
  server_name webcodr.io;
  ssl on;

  include tls-webcodr.io;

  location / {
    proxy_pass https://10.0.0.2:443;
  }
}

Repeat this for every site on a host on this domain and reload nginx. And you’re done. Just one certificate and TLS config for all your sites. Pretty neat, huh?

Telekom VDSL MTU and MSS Clamping for IPv4 and IPv6


Anyone who has ever flown to the USA knows the Electronic System for Travel Authorization (ESTA). Every passenger who enters the USA, changes planes there or even just flies over it has to register and obtain a permit. This fun costs 14 US dollars and is paid via pay.gov.

When I applied for an ESTA last year for a trip to the USA, pay.gov was unreachable for me via Telekom. Strangely enough, it worked fine over mobile data, so to me the problem had to be on Telekom’s side. A short exchange with @telekom_hilft on Twitter unfortunately didn’t bring any improvement, since access to pay.gov worked flawlessly for the support employee.

I didn’t pursue the problem much further, but kept trying from time to time whether the site was reachable. It never was, and over the last few days ambition got the better of me: I finally wanted to find the cause.

After a few Google searches it quickly became clear that the problems only occur with IPv6. Over IPv4 everything worked flawlessly. IPv6 didn’t seem like a plausible cause to me, because the DNS entries for pay.gov don’t contain an AAAA resource record. Since you can’t follow the request in the browser’s inspector – the connection and TLS handshake never get established in the first place – Wireshark had to step in.

And lo and behold: the access happens over IPv6 and the packets from pay.gov are dropped because they are invalid. After further Google searches it was clear that the cause could be related to the Maximum Transmission Unit (MTU) and the Maximum Segment Size (MSS). For DSL connections the MTU size is 1,492 bytes: the maximum MTU size minus the PPPoE header size, i.e. 1,500 bytes - 8 bytes. To calculate the threshold for MSS clamping, the maximum TCP/IP header size is additionally subtracted from the actual MTU size: 1,492 bytes - 40 bytes = 1,452 bytes.

So I checked the MTU and MSS values in the EdgeRouter and they were fine. After further research it was clear that the EdgeRouter only sets the MSS clamping value for IPv4 by default; the value for IPv6 has to be configured separately. No sooner said than done. And? Nothing, the connection still failed.

Stupidly, I had forgotten that TCP/IPv6 headers can be up to 60 bytes long and the MSS therefore has to be set to 1,432 bytes. As soon as that was done, pay.gov loaded without any problems.

This problem apparently affects all US government sites under the .gov TLD, but it can occur with other websites as well. It doesn’t necessarily lead to a total failure, since a client can tell the server via ICMP that it should fragment the packets further. In this specific case that doesn’t work, because pay.gov’s infrastructure ignores all ICMP packets. Such measures are often taken to protect critical systems from DDoS attacks.

MSS clamping is a kind of hack to solve this problem by other means. Packet headers are adjusted while the router processes them, telling the server to please send smaller packets in response.

As much as I understand the motivation to reject ICMP packets for security reasons, in this case it’s an unfortunate decision, since IPv6 depends far more on ICMP for its functionality than IPv4 ever did.

TL;DR

If you’re having strange problems with unreachable websites, or certain sites repeatedly take a very long time on first load, e.g. because the TLS handshake takes an unusually long time, MSS clamping can help.

For IPv4, the EdgeRouter should already set it correctly. For IPv6 it can, as usual, only be configured via the CLI:

set firewall options mss-clamp6 interface-type pppoe
set firewall options mss-clamp6 mss 1432

For IPv4 it should look like this:

ubnt@ubnt# show firewall options mss-clamp
 interface-type pppoe
 mss 1452

Instead of the interface type pppoe, all can also be active, which then covers not only PPPoE but also the relevant VPN protocols, for example.

Commit and save, problem solved.
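In the EdgeRouter’s configure mode that’s:

commit
save
exit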

Note: these MSS values only apply to DSL connections. For all other connection types that don’t use PPPoE (e.g. cable), you don’t subtract the 8 bytes for the PPPoE header and end up with 1,460 and 1,440 bytes respectively.

Summary

Do you have strange loading problems with certain websites? They do not work at all or the first request takes a long time? Then consider MSS Clamping as a possible solution.

In my case, pay.gov was impossible to reach via IPv6. If the server sends packets that are too big, your system can’t process them correctly. Wireshark will help you detect such faulty packets.

MSS clamping will alter the packet headers within your router to tell the server the maximum allowed packet size without the use of ICMP. It’s kind of a hack, but it works fine and is sometimes the only solution if the server blocks ICMP, as US government websites do.

In the case of IPv4, the EdgeRouter’s wizards have already done all the work for you, or you can use the MSS clamping GUI wizard after a manual configuration. Unfortunately, as always, there is no way to configure IPv6 features via the GUI. You have to rely on the CLI instead:

set firewall options mss-clamp6 interface-type pppoe
set firewall options mss-clamp6 mss 1432

Your IPv4 MSS clamping config should look like this:

ubnt@ubnt# show firewall options mss-clamp
 interface-type pppoe
 mss 1452

Beware: the above mentioned values are for DSL connections using PPPoE and are based on the following calculations:

  • MTU: 1,500 bytes - 8 bytes (max. allowed MTU size - PPPoE header size) = 1,492 bytes
  • MSS: 1,492 bytes (MTU value) - 40 bytes (TCP/IPv4 header size) or 60 bytes (TCP/IPv6 header size) = 1,452 bytes (IPv4) or 1,432 bytes (IPv6)

For other connection types without PPPoE, you don’t have to subtract the PPPoE header size. Cable connections should work fine with 1,460 bytes for IPv4 and 1,440 bytes for IPv6 respectively.

Welcome to webcodr.io

webcodr.de will automatically redirect all traffic to webcodr.io until 7 April 2018.

If you have bookmarked a post, please adjust the top-level domain of the URL to .io. Thank you!

Interface monitoring with Wireshark on an EdgeRouter

Here is just a neat little trick to use Wireshark for monitoring interfaces on your EdgeRouter. This is incredibly useful for debugging purposes.

The following commands only work on macOS or Linux.

ssh user@edgerouter_ip 'sudo tcpdump -f -i eth0 -w -' | wireshark -k -i -

If you’re monitoring the interface that carries your SSH connection to the EdgeRouter, you may want to ignore traffic on port 22.

ssh user@edgerouter_ip 'sudo tcpdump -f -i eth1 -w - not port 22' | wireshark -k -i -

Update for Windows users

After some fiddling around, I found a working solution for Windows 10 users. You just have to install the SSH client beta and Wireshark for Windows.

ssh user@egderouter_ip "sudo tcpdump -f -i eth0 -w -" | "C:\Program Files\Wireshark\Wireshark.exe" -k -i -

Troubleshooting advice:

  • The Windows SSH client will only work on command shells with admin privileges.
  • Use only double quotes. The Windows command line doesn’t handle single quotes the way shells on unixoid operating systems do.
  • Adjust the path to Wireshark if it’s not installed in the default directory.
  • CTRL + C or CTRL + X will not work to terminate the SSH connection. You have to close the window instead.