Quantum Cryptography Now Fast Enough For Video…

Researchers at the Cambridge Lab of Toshiba Research Europe have solved the problem of transferring highly sensitive data at high speed across a long-distance network. The team were able to demonstrate the continuous operation of quantum key distribution (QKD), a system that allows the communicating users to detect if a third party is trying to eavesdrop on the data communication, at a speed greater than one megabit/sec over a 50 km fibre optic network. The result rests on a light detector suited to high bit rates and a feedback system that maintains those rates during data transfer. … The faster one megabit/sec data handling will allow the one-time pad to be used for the encryption of video, a vast step forward over the current ability to only encrypt voice data.

Nmap 5 released

Network security starts with scanning: you need to know what you have so that you can identify your vulnerable points and manage the associated risk. Nmap excels at helping you enumerate your network and identify what is running. Nmap is also a key tool in the fight against Conficker and its ilk, and can be used to detect an infected node on a network.
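
A typical enumeration pass looks roughly like the sketch below; the address range is a placeholder, so substitute your own network.

```
# Sketch: enumerate a subnet, fingerprint services, and attempt OS detection.
# 192.168.1.0/24 is a placeholder range; -sV probes service versions,
# -O tries OS detection, -T4 uses faster timing suited to a reliable LAN.
sudo nmap -sV -O -T4 192.168.1.0/24
```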

With the release of Nmap 5, billed by the project as its most important release since 1997, scans are noticeably faster. Aside from the speed improvements, new tools such as Ncat and the Nmap Scripting Engine (NSE) make Nmap 5 a must-have.

  • “The new Ncat tool aims to be your Swiss Army Knife for data transfer, redirection, and debugging,” the Nmap 5.0 release announcement states.
  • NSE is all about automating network scanning tasks with scripts. “Those scripts are then executed in parallel with the speed and efficiency you expect from Nmap. All existing scripts have been improved, and 32 new ones added. New scripts include a whole bunch of MSRPC/NetBIOS attacks, queries, and vulnerability probes; open proxy detection; whois and AS number lookup queries; brute force attack scripts against the SNMP and POP3 protocols; and many more.” An example invocation follows.
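
As a rough sketch of what that looks like in practice, here is an NSE run against a single host; the target address is a placeholder, and the script names are ones that ship with 5.0, so adjust to what your install provides.

```
# Sketch: run a couple of SMB-related NSE scripts against one host on port 445.
# The target address is a placeholder.
nmap -p 445 --script smb-os-discovery,smb-enum-shares 192.168.1.10

# Or run the whole default script set (the -sC shortcut).
nmap -sC 192.168.1.10
```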

Other “stuff” in this version…

  • Ncat (data transfer, redirection, and debugging) – remember Hobbit’s nc? A quick sketch follows this list.
  • Ndiff scan comparison
  • Better performance
  • Improved Zenmap GUI (including a really neat feature that visually maps the network you have scanned)
  • Improved Nmap Scripting Engine (NSE): existing scripts reviewed and 32 new scripts added
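
For anyone who never used Hobbit’s nc, the classic one-shot file transfer with Ncat looks something like this; the hostname, port, and filenames are placeholders.

```
# Sketch: push a file from one box to another with Ncat.
# Receiver: listen on TCP 8080 and write whatever arrives to disk.
ncat -l 8080 > received.tar.gz

# Sender: connect, stream the file, and close once stdin is exhausted.
ncat --send-only receiver.example.com 8080 < backup.tar.gz
```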

A useful, if not must-have, tool. It not only applies to security, but also to simple things such as trying to find that pesky administrative interface to a WSS or MOSS environment when you cannot get access to the desktop… The more you have and know, the better your options, as they say.
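
To make the WSS/MOSS point concrete, something like the following sketch works; the server name is a placeholder, and Central Administration usually sits on a non-standard port chosen at install time, which is why the sweep covers every TCP port.

```
# Sketch: sweep all TCP ports on a suspected SharePoint box, fingerprint the
# HTTP services, and eyeball the results for the Central Administration site.
# The hostname is a placeholder.
nmap -p- -sV --open moss-server.example.local
```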

http://nmap.org/5/
http://nmap.org/5/#5changes

Opera Unite – a perspective change from the centralized model used by SharePoint?

Opera Unite, a web browser melded with a web server. Now there’s a novel concept.

Opera Unite allows you to share your files, stream music, host sites, and communicate in real time with people. The suite of services, and that is literally what they are, is comprehensive:

  • File Sharing
  • Photo Sharing
  • The Lounge
  • Fridge
  • Media Player
  • Web Server
  • and more…

But there’s a problem with it. A very big problem that I suspect Opera Marketing are all too aware of. Although Opera Unite claims to “directly link people’s personal computers together,” to use it you must have an account on Opera’s servers. Once you have that, all of your exchanges pass through Opera’s servers first. Sure, that’s an effective way to get around technical difficulties such as NAT, firewalls, etc., but the big issue is that it makes Opera the intermediary in your social interactions: not Facebook, not MySpace, but Opera. Think it through. Stepping past all the hype, the benchmarks*, and so on, you have just another lock-in scenario. If Opera is up, you’re up. Sure, your stuff is on your machine, but it can only be accessed via the Opera domain.

Is there a way around this? Do we need a way around this? Yes, it would be possible to create a swarm and find your friends, but what happens when your computer is down and somebody wants to access your content? Nothing.

*Benchmarks

Excerpts from http://unitehowto.com/Performance appear below. Take them in context.

Opera Unite uses very smart file I/O! Even if you save data to file on each request (the simplest, but stupidest, way to do it) it can still push out a very impressive 744 requests/second! (It probably means that this data is saved to memory and only dumped occasionally; smart move!)

It seems like Opera uses 13 threads (seems like a soft limit, but unchangeable). 13 concurrent connections max out @ 810req/s, 1.23ms processing time.

For comparison:

PHP+Apache(+MySQL) is almost 2 times faster than peak Unite performance.

Compiled C++ web server (MadFish WebToolkit) is only 6 times faster than Opera Unite, but that is compiled raw C++.

nginx (one of the fastest web servers available) is only 5 times faster than Opera Unite, clocked at 4900 req/s on the “Welcome to nginx” cycle (no I/O or scripting).

Direct read/write access to NTFS formatted drives from OS X

Yesterday I had a need to not just read but also write to an external USB NTFS-formatted drive. (I only run Photoshop on my Macs.) I found NTFS-3G to work, as usual, quite nicely. If you’re not familiar with it, I would suggest heading over to their Q & A section for a few minutes.

http://www.ntfs-3g.org/support.html#questions

The NTFS-3G driver is a freely and commercially available and supported read/write NTFS driver for Linux, FreeBSD, Mac OS X, NetBSD, Solaris, Haiku, and other operating systems. It provides safe and fast handling of the Windows XP, Windows Server 2003, Windows 2000 and Windows Vista file systems.
NTFS-3G develops, quality tests and supports a trustable, feature rich and high performance solution for hardware platforms and operating systems whose users need to reliably interoperate with NTFS.
The driver is in STABLE status since 2007. It is used by millions of desktop computers, consumer devices for reliable data exchange, and referenced in more than 20 computer books. Please see our test methods and testimonials on the driver quality page.
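
For the impatient, a manual mount boils down to roughly the following; the device node and mount point are assumptions, so check diskutil list for the real device, and note that the OS X port also relies on MacFUSE being installed (the packaged installer normally handles mounting for you).

```
# Sketch: mount an external NTFS volume read/write with NTFS-3G on OS X.
# /dev/disk2s1 and /Volumes/NTFS are placeholders; find the real device node
# with `diskutil list` and create the mount point first.
sudo mkdir -p /Volumes/NTFS
sudo ntfs-3g /dev/disk2s1 /Volumes/NTFS

# Unmount cleanly before unplugging the drive.
sudo umount /Volumes/NTFS
```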

ZFS: A gathering of quotes and links

A placeholder for stuff I have found useful while working through some ZFS issues I have been facing.

pulled from: http://zfs.macosforge.org/trac/wiki/whatis

ZFS is a new kind of filesystem that provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. ZFS is not an incremental improvement to existing technology; it is a fundamentally new approach to data management. We’ve blown away 20 years of obsolete assumptions, eliminated complexity at the source, and created a storage system that’s actually a pleasure to use.
ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage. Thousands of filesystems can draw from a common storage pool, each one consuming only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all filesystems at all times.
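
As a sketch of what that pooled model looks like at the command line (device and dataset names below are made up):

```
# Sketch: one mirrored pool, many filesystems drawing from the same space.
# c1t0d0 and c1t1d0 are placeholder Solaris device names.
zpool create tank mirror c1t0d0 c1t1d0
zfs create tank/home
zfs create tank/home/alice
zfs create tank/projects

# No up-front partitioning or sizing; every dataset sees the whole pool.
zfs list
```
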
All operations are copy-on-write transactions, so the on-disk state is always valid. There is no need to fsck(1M) a ZFS filesystem, ever. Every block is checksummed to prevent silent data corruption, and the data is self-healing in replicated (mirrored or RAID) configurations. If one copy is damaged, ZFS will detect it and use another copy to repair it. ZFS introduces a new data replication model called RAID-Z. It is similar to RAID-5 but uses variable stripe width to eliminate the RAID-5 write hole (stripe corruption due to loss of power between data and parity updates). All RAID-Z writes are full-stripe writes. There’s no read-modify-write tax, no write hole, and — the best part — no need for NVRAM in hardware. ZFS loves cheap disks.
But cheap disks can fail, so ZFS provides disk scrubbing. Like ECC memory scrubbing, the idea is to read all data to detect latent errors while they’re still correctable. A scrub traverses the entire storage pool to read every copy of every block, validate it against its 256-bit checksum, and repair it if necessary. All this happens while the storage pool is live and in use.
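
Continuing the sketch, a RAID-Z pool and a scrub are each one-liners (device and pool names are still placeholders):

```
# Sketch: create a single-parity RAID-Z pool, then scrub it while it is live.
zpool create bigtank raidz c1t0d0 c1t1d0 c1t2d0
zpool scrub bigtank
zpool status -v bigtank   # shows scrub progress and any repaired errors
```
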
ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The pipeline operates on I/O dependency graphs and provides scoreboarding, priority, deadline scheduling, out-of-order issue and I/O aggregation. I/O loads that bring other filesystems to their knees are handled with ease by the ZFS I/O pipeline.
ZFS provides unlimited constant-time snapshots and clones. A snapshot is a read-only point-in-time copy of a filesystem, while a clone is a writable copy of a snapshot. Clones provide an extremely space-efficient way to store many copies of mostly-shared data such as workspaces, software installations, and diskless clients.
ZFS backup and restore are powered by snapshots. Any snapshot can generate a full backup, and any pair of snapshots can generate an incremental backup. Incremental backups are so efficient that they can be used for remote replication — e.g. to transmit an incremental update every 10 seconds.
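
A rough sketch of that snapshot/clone/replication workflow (dataset names and the backup host are placeholders):

```
# Sketch: point-in-time snapshot, writable clone, and replication via send/receive.
zfs snapshot tank/home@monday
zfs clone tank/home@monday tank/home-experiment   # writable copy of the snapshot

# Full backup from one snapshot, then an incremental between two snapshots,
# streamed to another machine over ssh.
zfs send tank/home@monday | ssh backuphost zfs receive backup/home
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday | ssh backuphost zfs receive backup/home
```
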
There are no arbitrary limits in ZFS. You can have as many files as you want; full 64-bit file offsets; unlimited links, directory entries, snapshots, and so on.
ZFS provides built-in compression. In addition to reducing space usage by 2-3x, compression also reduces the amount of I/O by 2-3x. For this reason, enabling compression actually makes some workloads go faster.
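
Turning it on is a single property; for example (dataset name as in the earlier sketch):

```
# Sketch: enable compression on a dataset and see how well it is doing.
zfs set compression=on tank/home
zfs get compression,compressratio tank/home
```
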
In addition to filesystems, ZFS storage pools can provide volumes for applications that need raw-device semantics. ZFS volumes can be used as swap devices, for example. And if you enable compression on a swap volume, you now have compressed virtual memory.
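
On OpenSolaris that looks roughly like the following; the size, names, and zvol path are assumptions and differ on other platforms.

```
# Sketch: carve a 2 GB volume out of the pool, compress it, and add it as swap.
zfs create -V 2g -o compression=on tank/swapvol
swap -a /dev/zvol/dsk/tank/swapvol
```
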
Sun just announced a series of open source storage appliances that use OpenSolaris and the ZFS file system. While the hardware includes some interesting options including solid state drives (SSD) for improving both read and write performance, the most alluring features are the file system and the analytics made available through SNIA-standard RPC calls, using the DTrace dynamic tracing framework included in OpenSolaris. These features are not limited to Sun hardware, making it possible to duplicate the functionality with virtually any hardware.
Among the ZFS goodies is an interesting feature called Hybrid Storage Pools, which integrates DRAM, read-optimized SSDs, write-optimized SSDs, and regular disk into a seamless whole. The SSDs are intended to replace small, expensive read and write caches with higher-capacity 18GB write-biased SSDs and 100GB read-biased SSDs to get exceptional performance at a cost that should be competitive with more basic storage systems.
Sun has done considerable work on the SSDs to avoid the typical issues of limited life spans, over-provisioning the storage and optimizing wear-leveling algorithms to ensure that the SSDs should last a minimum of three years. Given how quickly SSDs are dropping in price, this seems a more than adequate lifetime.
DTrace is used to make all sorts of performance data available. The admin can drill down by file system, type of data, type of interface, and other parameters, finding which application is using the most I/Os, or the most bandwidth — even the life left in the SSD drives in the system, which enables extremely granular optimization. No partnerships have been announced yet, but Sun is working with many storage, virtualization, and systems management vendors to ensure that data interchange works well.
Between the management capabilities, the clustering capabilities of ZFS, and the data services such as snapshots, cloning, mirroring, replication, compression, thin provisioning, and support for iSCSI, CIFS, NFS, HTTP, and FTP protocols, the Fishworks storage system offers a lot of potential. The Sun hardware should provide good capabilities at a good price. But the best part is that the software magic is also available through the open source OpenSolaris and ZFS, as long as the hardware can run OpenSolaris.

good detail here: http://www.techworld.com/storage/features/index.cfm?featureid=2744

great article here: http://www.tech-recipes.com/rx/1446/zfs_ten_reasons_to_reformat_your_hard_drives/

======================================

Now if you’re interested in trying it out (and, after reading what it can do, who is not?) try the following links:

  • OSX – http://zfs.macosforge.org/trac/wiki
  • FreeBSD – FreeBSD 7.0 now has excellent ZFS support (a quick-start sketch follows this list)
  • Windows – oh boy… start reading again from the top.
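
For the FreeBSD route, getting a first pool up is roughly the following; the disk names are placeholders, and FreeBSD 7-era ZFS on i386 also benefited from some loader.conf memory tuning that is omitted here.

```
# Sketch: enable ZFS on FreeBSD 7.x and create a mirrored pool.
# ad4 and ad6 are placeholder disk device names.
echo 'zfs_enable="YES"' >> /etc/rc.conf   # mount ZFS filesystems at boot
kldload zfs                               # load the kernel module now
zpool create tank mirror ad4 ad6
zfs create tank/data
```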

BotHunter

In keeping with all things interesting in the FreeBSD world, I recently came across BotHunter.

http://www.bothunter.net/

“BotHunter is a passive network monitoring tool designed to recognize the communication patterns of malware-infected computers within your network perimeter. Using an advanced infection-dialog-based event correlation engine (patent pending), BotHunter represents the most in-depth network-based malware infection diagnosis system available today.”

“BotHunter is available free for both experimental and operational use, and to help stimulate research in understanding the life cycle of malware infections.”

It works across a rather broad range of systems, including FreeBSD, and is worth checking out.

  • Linux – tested on Fedora, Red Hat Enterprise Linux, Debian, and SuSE distributions
  • FreeBSD – tested on Product Release 7.0
  • Mac OS X – tested on Tiger and Leopard (Mac OS 10.4 and 10.5)
  • Windows XP – a self-installing Win32 executable is available and will install all necessary supporting packages
  • Live-CD – a self-booting ISO image of BotHunter operating on Ubuntu Linux