Sunday, November 01, 2015

Innatech RG4332


So, I'm a bit bored tonight... I scanned my network just for fun and found something interesting about the Unifi router.


So I tried logging in as root and guessed the password.

login as: root
root@192.168.0.1's password:


BusyBox v1.6.1 (2013-12-23 17:22:03 HKT) built-in shell (ash)
Enter 'help' for a list of built-in commands.

Well, that is easy.

# cat /etc/shadow
#root:$1$BOYmzSKq$ePjEPSpkQGeBcZjlEeLqI.:13796:0:99999:7:::
root:$1$BOYmzSKq$ePjEPSpkQGeBcZjlEeLqI.:13796:0:99999:7:::
#tw:$1$zxEm2v6Q$qEbPfojsrrE/YkzqRm7qV/:13796:0:99999:7:::
#tw:$1$zxEm2v6Q$qEbPfojsrrE/YkzqRm7qV/:13796:0:99999:7:::
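
By the way, if you want to test a password guess against one of these MD5-crypt hashes offline, openssl can rebuild the hash from the same salt (a quick sketch; "yourguess" is just a placeholder):

# reuse the salt from the shadow entry above
openssl passwd -1 -salt BOYmzSKq yourguess
# if the output matches root's hash, the guess is correct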

Looks like these password hashes have been recycled across a few router firmwares; check it out. Not a surprise, I guess. More interesting bits below:

# ps
  PID USER       VSZ STAT COMMAND
    1 root      1584 S    init
    2 root         0 SW<  [kthreadd]
    3 root         0 SW<  [ksoftirqd/0]
    4 root         0 SW<  [watchdog/0]
    5 root         0 SW<  [events/0]
    6 root         0 SW<  [khelper]
    9 root         0 SW<  [async/mgr]
   74 root         0 SW<  [kblockd/0]
   84 root         0 SW<  [khubd]
  101 root         0 SW   [khungtaskd]
  102 root         0 SW   [pdflush]
  103 root         0 SW   [pdflush]
  104 root         0 SW<  [kswapd0]
  106 root         0 SW<  [crypto/0]
  663 root         0 SW<  [mtdblockd]
  726 root      4428 S    /usr/sbin/mini_httpd -d /usr/www -c /cgi-bin/* -u ro
  730 root      2632 S    /usr/bin/pc
  732 root      1588 S    -/bin/sh
  733 root      4964 S    /usr/bin/logic
  734 root      2560 S    /usr/bin/ip6mon
  735 root      2564 S    /usr/bin/ramon
  736 root      2576 S    /usr/bin/ip6aac
  742 root      1592 S    /usr/sbin/inetd
  744 root      2224 S    /usr/sbin/dropbear
 1412 root      2612 S    /usr/sbin/pppd plugin rp-pppoe.so eth5 user
 1420 root      1204 S    /sbin/udhcpc -i eth8 -m 1500 -f
 1534 root      1984 S    /usr/sbin/dhcp6c -c /var/dhcpv6/dhcp6c_301203713 -f
 1698 root      1204 S    /sbin/miniupnpd -f /etc/upnpd.conf -d
 1921 root      1208 S    /usr/sbin/radvd -C /var/radvd.conf -d 1
 1967 root      2040 S    /usr/sbin/dhcp6s -c /var/dhcpv6/br0.conf -f br0
 2007 root      1320 S    /sbin/dproxy -c /etc/dproxy.conf -d
 2124 root      1244 S    /sbin/udhcpd /var/udhcpd.conf
 6840 root      2280 R    /usr/sbin/dropbear
 6841 root      1592 S    -sh
 6872 root      1584 R    ps

# pwd
/var/log

# cat device_info
Manufacturer: innacomm
ProductClass: RG4332
SerialNumber: RGWINNIN15********
IP: 192.168.0.1
HWVer: RTL8196C
SWVer: RG4332_V2.7.0

There is a Samba config file in /etc, but when I try to connect, it doesn't work. Not sure what it's for; note that the interfaces line below binds 192.168.1.1 while this router's LAN IP is 192.168.0.1, which could be why the connection fails. A quick client-side test is sketched after the config dump.


# cat smb.conf

[global]
workgroup = home
netbios name = dsl_route
server string = Samba Server
security = user
local master = Yes
preferred master = Yes
encrypt passwords = yes
smb passwd file = /var/smbpasswd
#private dir = /tmp/smbvar
socket options = TCP_NODELAY
wins proxy = no
log level = 10
load printers = no
guest account = root
log file = /var/log/smblog
max log size = 0
interfaces = 192.168.1.1/255.255.255.0
dns proxy = no
browseable = yes
guest ok = yes
writeable = no

display charset = utf8
unix charset = utf8
dos charset = utf8

public = yes

[usb1_1]
path = /mnt/usb1_1
writeable = yes
browseable = yes
directory mask = 0777
create mask = 0777
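
For the record, a quick way to poke at it from a Linux client would be something like this (null session, since guest ok = yes):

smbclient -L //192.168.0.1 -N        # list the shares without a password
smbclient //192.168.0.1/usb1_1 -N    # open the USB share directly

For me this goes nowhere, which fits the interface mismatch mentioned above.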

I'm getting sleepy, so I'll continue this next time I hope... Bai for now.




Thursday, March 05, 2015

vagrant in Windows 7

step 1: download vagrant installer from https://www.vagrantup.com/downloads.html.
step 2: read below:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\toyol>vagrant -v
Vagrant 1.7.2

C:\Users\toyol>vagrant -h
Usage: vagrant [options] <command> [<args>]

    -v, --version                    Print the version and exit.
    -h, --help                       Print this help.

Common commands:
     box             manages boxes: installation, removal, etc.
     connect         connect to a remotely shared Vagrant environment
     destroy         stops and deletes all traces of the vagrant machine
     global-status   outputs status Vagrant environments for this user
     halt            stops the vagrant machine
     help            shows the help for a subcommand
     init            initializes a new Vagrant environment by creating a Vagrantfile
     login           log in to HashiCorp's Atlas
     package         packages a running vagrant environment into a box
     plugin          manages plugins: install, uninstall, update, etc.
     provision       provisions the vagrant machine
     push            deploys code in this environment to a configured destination
     rdp             connects to machine via RDP
     reload          restarts vagrant machine, loads new Vagrantfile configuration
     resume          resume a suspended vagrant machine
     share           share your Vagrant environment with anyone in the world
     ssh             connects to machine via SSH
     ssh-config      outputs OpenSSH valid configuration to connect to the machine
     status          outputs status of the vagrant machine
     suspend         suspends the machine
     up              starts and provisions the vagrant environment
     version         prints current and latest Vagrant version

For help on any individual command run `vagrant COMMAND -h`

Additional subcommands are available, but are either more advanced
or not commonly used. To see all subcommands, run the command
`vagrant list-commands`.

C:\Users\toyol>vagrant plugin install vagrant-hostmanager
Installing the 'vagrant-hostmanager' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

Could not fetch specs from http://gems.hashicorp.com/

Retrying source fetch due to error (2/3): Bundler::HTTPError Could not fetch specs from http://gems.hashicorp.com/
Retrying source fetch due to error (3/3): Bundler::HTTPError Could not fetch specs from http://gems.hashicorp.com/

C:\Users\toyol>SET http_proxy=http://cacing-tanah.com:8080

C:\Users\toyol>vagrant plugin install vagrant-hostmanager
Installing the 'vagrant-hostmanager' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hostmanager (1.5.0)'!
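
Side note: SET only lasts for the current cmd session. If the proxy should stick for future sessions too, setx should do it (same proxy host as above; new cmd windows pick it up, the current one won't):

setx http_proxy http://cacing-tanah.com:8080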

C:\Users\toyol>vagrant box add relativkreativ/centos-7-minimal
==> box: Loading metadata for box 'relativkreativ/centos-7-minimal'
    box: URL: https://atlas.hashicorp.com/relativkreativ/centos-7-minimal
==> box: Adding box 'relativkreativ/centos-7-minimal' (v1.0.3) for provider: virtualbox
    box: Downloading: https://vagrantcloud.com/relativkreativ/boxes/centos-7-minimal/versions/1.0.3/providers/virtualbox.box
    box: Progress: 100% (Rate: 5646k/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'relativkreativ/centos-7-minimal' (v1.0.3) for 'virtualbox'!

C:\Users\toyol>vagrant init relativkreativ/centos-7-minimal
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
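
The generated Vagrantfile works as-is, but since I installed vagrant-hostmanager earlier, this is roughly what it would look like with the plugin wired in (a sketch, not my exact file):

Vagrant.configure(2) do |config|
  config.vm.box = "relativkreativ/centos-7-minimal"
  # options provided by the vagrant-hostmanager plugin
  config.hostmanager.enabled = true
  config.hostmanager.manage_host = true
end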

C:\Users\toyol>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'relativkreativ/centos-7-minimal'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'relativkreativ/centos-7-minimal' is up to date...
==> default: Setting the name of the VM: toyol_default_1425544554065_14802
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default: Warning: Connection timeout. Retrying...
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if its present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => C:/Users/toyol

C:\Users\toyol>vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/toyol/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL


C:\Users\toyol>

step 3: get the private_key from the above ssh-config output and save it to your SSH key path. If you're using PuTTY, you need to convert it to a .ppk file first (http://meinit.nl/using-your-openssh-private-key-in-putty)
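
If you have an OpenSSH client handy instead of PuTTY, you can skip the .ppk conversion and feed the key straight in, using the values from ssh-config above (or just run vagrant ssh, which does the same thing if an ssh client is on your PATH):

ssh -p 2222 -i C:/Users/toyol/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1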

login as: vagrant
Authenticating with public key "imported-openssh-key"
Passphrase for key "imported-openssh-key":
Last login: Tue Dec 16 09:59:48 2014
[vagrant@localhost ~]$ uptime
 09:45:14 up 8 min,  1 user,  load average: 0.00, 0.10, 0.11

Remove a cluster node in HPUX ServiceGuard


1- Make sure all packages are running on the primary node; if any are running on the node you're about to remove, move them over first (see the sketch after the cmviewcl output). In this case we want to remove clnode40 from the cluster.

clnode40/root/home/root#cmviewcl

CLUSTER        STATUS       
ssdd_cluster    up           
  
  NODE           STATUS       STATE        
  clnode40       up           running      
  clnode53       up           running      

    PACKAGE        STATUS           STATE            AUTO_RUN    NODE        
    DBssdd          up               running          enabled     clnode53    
    NFSssdd         up               running          enabled     clnode53    
    CIssdd          up               running          enabled     clnode53    
    rsyncssdd       up               running          enabled     clnode53    
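
If any package had been running on clnode40, I would move it over to clnode53 first, roughly like this (DBssdd as the example package):

cmhaltpkg DBssdd               # halt the package
cmrunpkg -n clnode53 DBssdd    # start it on the surviving node
cmmodpkg -e DBssdd             # re-enable package switching (AUTO_RUN)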

2- Halt the secondary node. I'm halting it from inside the node itself here, but I suggest doing it from the primary node, since the removal later on can't be done from the node being removed.
clnode40/etc/cmcluster/ssdd#cmhaltnode -f clnode40
Disabling all packages from starting on nodes to be halted.
Warning:  Do not modify or enable packages until the halt operation is completed.
Waiting for nodes to halt ..... done
Successfully halted all nodes specified.
Halt operation complete.

clnode40/etc/cmcluster/ssdd#cmviewcl

CLUSTER        STATUS      
ssdd_cluster    up          

  NODE           STATUS       STATE      
  clnode40       down         halted      
  clnode53       up           running    

    PACKAGE        STATUS           STATE            AUTO_RUN    NODE      
    DBssdd          up               running          enabled     clnode53  
    NFSssdd         up               running          enabled     clnode53  
    CIssdd          up               running          enabled     clnode53  
    rsyncssdd       up               running          enabled     clnode53  

3- Pull the current cluster and package configs; these commands save the config to the file name you specify. Repeat cmgetconf -p for each package (DBssdd, NFSssdd, CIssdd), since step 6 needs all of them.
cmgetconf -p rsyncssdd rsyncssdd.conf
cmgetconf -c ssdd_cluster ssdd_cluster.conf

4- Remove the reference lines for the departing node (a grep sketch follows this list). The reference line begins with the string:

NODE_NAME (in cluster config)
or
node_name (in package config)
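
A quick way to locate the lines to delete in the files from step 3 (in the cluster file, the NODE_NAME line for clnode40 is normally followed by that node's NETWORK_INTERFACE/HEARTBEAT_IP lines, which have to go too):

grep -n clnode40 ssdd_cluster.conf rsyncssdd.conf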

5- If removing the node results in a one-node cluster, also remove all lines containing the following references from the cluster ASCII file:

FIRST_CLUSTER_LOCK_VG
FIRST_CLUSTER_LOCK_PV

The lock disk function is not used in a one-node cluster.

6- Apply the new modified config:
clnode53/etc/cmcluster/ssdd#cmapplyconf -C ssdd_cluster.conf -P DBssdd.conf -P NFSssdd.conf -P CIssdd.conf -P rsyncssdd.conf
MAX_CONFIGURED_PACKAGES configured to 300.
MAX_CONFIGURED_PACKAGES configured to 300.
NFSssdd.conf:490: service_halt_timeout value of 0 is equivalent to 1 sec.
Modifying the cluster locking mechanism from lvm to majority while cluster ssdd_cluster is running.
Deleting FIRST_CLUSTER_LOCK_PV /dev/disk/disk98 from node clnode53 while cluster is running.
Removing configuration from node clnode40
Modifying configuration on node clnode53
Deleting node clnode40 from cluster ssdd_cluster

Modify the cluster configuration ([y]/n)? y
Completed the cluster creation

7- Verify the node is no longer visible in cmviewcl.
clnode53:/etc/cmcluster/ssdd# cmviewcl

CLUSTER        STATUS      
ssdd_cluster    up          

  NODE           STATUS       STATE      
  clnode53       up           running    

    PACKAGE        STATUS           STATE            AUTO_RUN    NODE      
    DBssdd          up               running          enabled     clnode53  
    NFSssdd         up               running          enabled     clnode53  
    CIssdd          up               running          enabled     clnode53  
    rsyncssdd       up               running          enabled     clnode53  

8- On the removed node (clnode40), set AUTOSTART_CMCLD=0 in /etc/rc.config.d/cmcluster so it doesn't try to rejoin the cluster at boot, and set AUTO_VG_ACTIVATE=1 in /etc/lvmrc so it activates its volume groups on its own again.
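
HP-UX has ch_rc for editing the rc config files, so on clnode40 something like this should handle the first part (edit /etc/lvmrc by hand for the second):

/usr/sbin/ch_rc -a -p AUTOSTART_CMCLD=0 /etc/rc.config.d/cmcluster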

The command outputs are mine; the cluster commands are lifted from the HP site: http://support.hp.com/us-en/document/c01058926


#WHATYEARISTHIS??