Restarting ESX Agents
# Restart the VMware ESXi host daemon and VMware vCenter Agent services using these commands:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
# To restart all ESXi management agents on the host, run the command:
services.sh restart
ESX Host Management
# Enter maintenance mode (non-vSAN host)
esxcli system maintenanceMode set --enable true
# Exit maintenance mode
esxcli system maintenanceMode set --enable false --timeout 5
# Enter maintenance mode on a vSAN host
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility
# Get maintenance mode status
esxcli system maintenanceMode get
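When scripting host evacuation it can help to block until the transition actually completes. A minimal sketch (assumptions: `wait_for_maintenance` is a hypothetical helper name, `esxcli system maintenanceMode get` prints `Enabled`/`Disabled`, and a 5-second poll interval is acceptable):

```shell
# Sketch: block until the host reports maintenance mode is active.
# Polls `esxcli system maintenanceMode get` every 5 seconds.
wait_for_maintenance() {
    while [ "$(esxcli system maintenanceMode get)" != "Enabled" ]; do
        sleep 5
    done
    echo "Host is in maintenance mode"
}
```

Call it right after issuing the `maintenanceMode set --enable true` command above.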
Stop and Start/Restart vCSA Services
# Run this command to list the vCenter Server 7 Appliance services:
service-control --list
# To view the current status of the vCenter Server 7 Appliance services, type the command:
service-control --status
# Start a service:
service-control --start servicename
# Stop a service:
service-control --stop servicename
# To check the status of all services:
service-control --status --all
# To start all services:
service-control --start --all
# To stop all services:
service-control --stop --all
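The commands above only expose start and stop; a restart is simply a stop followed by a start. A small wrapper sketch (`restart_vcsa_service` is a hypothetical helper name; the service name is whatever `service-control --list` reports):

```shell
# Sketch: restart a single vCSA service by stopping then starting it.
restart_vcsa_service() {
    svc="$1"
    service-control --stop "$svc" || return 1
    service-control --start "$svc"
}

# Usage (hypothetical service name):
# restart_vcsa_service vmware-vpxd
```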
vCenter Services List
Service Name | Description |
---|---|
vmware-vmon | VMware Service Lifecycle Manager |
vmonapi | VMware Service Lifecycle Manager API |
vmafdd | VMware Authentication Framework |
vmdird | VMware Directory Service |
vmcad | VMware Certificate Service |
lookupsvc | VMware Lookup Service |
vmware-sca | VMware Service Control Agent |
vmware-stsd | VMware Security Token Service |
vmware-rhttpproxy | VMware HTTP Reverse Proxy |
vmware-envoy | VMware Envoy Proxy |
vmware-netdumper | VMware vSphere ESXi Dump Collector |
vmware-vapi-endpoint | VMware vAPI Endpoint |
vmware-vpxd-svcs | VMware vCenter-Services |
vmware-perfcharts | VMware Performance Charts |
applmgmt | VMware Appliance Management Service |
vmware-statsmonitor | VMware Appliance Monitoring Service |
vmware-cis-license | VMware License Service |
vmware-vpostgres | VMware Postgres |
vmware-postgres-archiver | VMware Postgres Archiver |
vmware-vdtc | VMware vSphere Distributed Tracing Collector |
vmware-vpxd | VMware vCenter Server |
vmware-eam | VMware ESX Agent Manager |
vmware-vsm | VMware vService Manager |
vmware-sps | VMware vSphere Profile-Driven Storage Service |
pschealth | VMware Platform Services Controller Health Monitor |
vmware-rbd-watchdog | VMware vSphere Auto Deploy Waiter |
vmware-content-library | VMware Content Library Service |
vmware-imagebuilder | VMware Image Builder Manager |
lwsmd | Likewise Service Manager |
vmcam | VMware vSphere Authentication Proxy |
vmware-vcha | VMware vCenter High Availability |
vmware-updatemgr | VMware Update Manager |
vmware-vsan-health | VMware VSAN Health Service |
vsphere-ui | VMware vSphere Client |
vmware-hvc | VMware Hybrid VC Service |
vmware-trustmanagement | VMware Trust Management Service |
vmware-certificatemanagement | VMware Certificate Management Service |
vmware-certificateauthority | VMware Certificate Authority Service |
vmware-pod | VMware Patching and Host Management Service |
vlcm | VMware vCenter Lifecycle API |
vmware-analytics | VMware Analytics Service |
vmware-topologysvc | VMware Topology Service |
vmware-infraprofile | VMware Infraprofile Service |
wcp | Workload Control Plane |
vtsdb | VMware vTsdb Service |
vstats | VMware vStats Service |
observability | VMware VCSA Observability Service |
observability-vapi | VMware VCSA Observability VAPI Service |
Virtual Machine
# List all VMs in the ESXi host
vim-cmd vmsvc/getallvms
# The output lists each VM's Vmid
# Get VM running state by using the VM-id
vim-cmd vmsvc/power.getstate <vm-id>
# To gracefully shutdown the VM
vim-cmd vmsvc/power.shutdown <vm-id>
# To forcibly power off the VM (equivalent to pulling the power)
vim-cmd vmsvc/power.off <vm-id> # Only when the VM fails to shut down properly
# list of VMs registered on hosts
cat /etc/vmware/hostd/vmInventory.xml
56 /vmfs/volumes/vsan:528e24c4fab25460-bbc1e9f77409188b/1518385d-742b-749d-14dc-08f1ea8c406e/Server2016.vmx
# Register a VM that is in inaccessible status on the host
vim-cmd solo/registervm /vmfs/volumes/vsan\:528e24c4fab25460-bbc1e9f77409188b/1518385d-742b-749d-14dc-08f1ea8c406e/Server2016.vmx
# Get all VMs on host
vim-cmd vmsvc/getallvms
# Get VM power status
vim-cmd vmsvc/power.getstate VMID
# shutdown VM
vim-cmd vmsvc/power.shutdown VMID
# power off
vim-cmd vmsvc/power.off VMID
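The shutdown and power-off commands above are often combined: try a graceful guest shutdown first and only pull the plug if the VM does not stop in time. A sketch (`shutdown_vm` is a hypothetical helper; VMID comes from `vim-cmd vmsvc/getallvms`, default timeout 120 s with 5 s polls):

```shell
# Sketch: attempt a guest shutdown, fall back to a hard power-off on timeout.
shutdown_vm() {
    vmid="$1"
    timeout="${2:-120}"
    vim-cmd vmsvc/power.shutdown "$vmid"
    while [ "$timeout" -gt 0 ]; do
        # power.getstate reports "Powered off" once the guest has stopped
        vim-cmd vmsvc/power.getstate "$vmid" | grep -q "Powered off" && return 0
        sleep 5
        timeout=$((timeout - 5))
    done
    vim-cmd vmsvc/power.off "$vmid"   # last resort: hard power-off
}
```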
# Check services status
service-control --status
ESXCLI Drivers – Check the compatibility in the VMware HCL list.
# How to find NIC information:
esxcli network nic list
# Display info for a network card
esxcli network nic get -n vmnic0
# Locate this information:
Adapter name: NC553i
Driver: elxnet
Driver version: 10.5.121.7
Firmware level: 10.2.340.19
# Check the compatibility in the VMware HCL list for all cards
vmkchdev -l |grep vmnic
0000:62:00:4:1137:0045:1137:012c vmkernel vmhba1
# How to find HBA information:
esxcfg-scsidevs -a
or
/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -d
# How to find HBA information of an adapter:
/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -l -i vmhba4/qlogic
or
esxcli storage core adapter list
# Locate this information:
Name: QMH2562
Firmware 8.01.02
Flash firmware level: 7.03.00
Bios: 3.24
Driver version: 2.1.27.0
# To locate driver info:
vmkload_mod -s HBADriver | grep Version
Version: 4.0.0.70-i.vmw.703.0.20.19129100
# Check the compatibility in the VMware HCL list:
vmkchdev -l |grep vmhba1
0000:62:00:4:1137:0045:1137:012c vmkernel vmhba1
Locate this information in http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io
VID = 1137
DID = 0045
SVID = 1137
SDID = 012c
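The `vmkchdev` line packs the PCI address and the four HCL identifiers into one colon-separated string, so a quick `awk` split recovers them (sample string taken from the output above):

```shell
# Split a vmkchdev PCI string into the four IDs used on the HCL search page.
# Format: <pci-address (4 fields)>:<VID>:<DID>:<SVID>:<SDID>
pciid="0000:62:00:4:1137:0045:1137:012c"   # sample line from vmkchdev above
ids=$(echo "$pciid" | awk -F: '{printf "VID=%s DID=%s SVID=%s SDID=%s", $5, $6, $7, $8}')
echo "$ids"
```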
ESXCLI storage
# Rescan for new storage on all adapters
esxcfg-rescan --all
# List Storage adapters
esxcli storage core adapter list
# Determine the driver type that the Host Bus Adapter is currently using
esxcfg-scsidevs -a
vmhba0 pvscsi link-n/a pscsi.vmhba0 (0000:03:00.0) VMware Inc. PVSCSI SCSI Controller
vmhba1 vmkata link-n/a ide.vmhba1 (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
vmhba64 vmkata link-n/a ide.vmhba64 (0000:00:07.1) Intel Corporation PIIX4 for 430TX/440BX/MX IDE Controller
# Determine driver version details for HBA controller
vmkload_mod -s pvscsi
# Search for new VMFS datastores
vmkfstools -V
# List of VMFS snapshots
esxcli storage vmfs snapshot list
# Mount a snapshot based on its VMFS UUID
esxcli storage vmfs snapshot mount -u "aaaa-aaaa-aaaa-aaa"
# When the original datastore is still online, resignature the snapshot to generate a different UUID. The snapshot needs RW access.
esxcli storage vmfs snapshot resignature -u "aaaa-aaaa-aaaa-aaa"
# List datastores with the extents of each volume and the mapping from device name to UUID
esxcli storage vmfs extent list
# To generate a compact list of the LUNs currently connected to the ESXi host, including VMFS version.
esxcli storage filesystem list
# Check locking mechanism
esxcli storage vmfs lockmode list
# Switch from ATS to SCSI locking, and back again
esxcli storage vmfs lockmode set --scsi --volume-label=vmfs3
esxcli storage vmfs lockmode set --ats --volume-label=vmfs3
# Check ATS heartbeat status
esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5
# Disable ats
esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5
# Enable ATS
esxcli system settings advanced set -i 1 -o /VMFS3/UseATSForHBOnVMFS5
# check storage array multipathing
esxcli storage core path list
# Create fake SSD on iSCSI LUN
esxcli storage nmp device list
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --device=naa.6006016015301d00167ce6e2ddb3de11 --option=enable_ssd # host reboot is required
# To see which worlds have a device opened for a LUN; typically needed for devices in PDL, to find the processes that are using the device
https://kb.vmware.com/s/article/2014155
# esxcli storage core device world list -d naa.6006048c870bbed5047ce8d51a260ad1
Device World ID Open Count World Name
------------------------------------ -------- ---------- ------------
naa.6006048c870bbed5047ce8d51a260ad1 32798 1 idle0
naa.6006048c870bbed5047ce8d51a260ad1 32858 1 helper14-0
naa.6006048c870bbed5047ce8d51a260ad1 32860 1 helper14-2
naa.6006048c870bbed5047ce8d51a260ad1 32937 1 helper26-0
# WWID from RDM disk
RDM disk ID > vml.0200100000600601601fc04500c260d45af966c4f9565241494420
ls -alh /vmfs/devices/disks/ | grep 0200100000600601601fc04500c260d45af966c4f9565241494420
lrwxrwxrwx 1 root root 36 Dec 19 10:27 vml.0200100000600601601fc04500c260d45af966c4f9565241494420 -> naa.600601601fc04500c260d45af966c4f9
600601601fc04500c260d45af966c4f9 is the WWID
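The NAA can also be cut straight out of the vml identifier, assuming the common layout shown above: `vml.` plus a 10-character header plus 32 hex characters of NAA ID (trailing bytes are vendor data). A sketch using the sample ID from above:

```shell
# Sketch: recover the NAA (WWID) from an RDM vml identifier.
# Assumed layout: "vml." + 10-char header + 32 hex chars of NAA + vendor bytes.
vml="vml.0200100000600601601fc04500c260d45af966c4f9565241494420"
naa=$(echo "$vml" | cut -c15-46)
echo "naa.$naa"
```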
ESXCLI Networking Commands
# List vmkernel ports - get IPv4 addresses
esxcli network ip interface list
#To get IPv4 addresses details:
esxcli network ip interface ipv4 get
Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
---- -------------- --------------- --------------- ------------ ------------- --------
vmk0 192.168.198.21 255.255.255.0 192.168.198.255 STATIC 192.168.198.1 false
vmk1 172.30.1.167 255.255.255.224 172.30.1.191 STATIC 0.0.0.0 false
#List current routing configuration using the following command:
esxcli network ip route ipv4 list
#The command syntax for adding and removing an additional route is:
esxcli network ip route ipv4 add/remove
eg:
esxcli network ip route ipv4 add -n 172.16.20.0/24 -g 10.10.100.110
# Check jumbo frame connectivity with ping - no data fragmentation.
# MTU of 9000 (minus 28 bytes for overhead) to another ESXi host
vmkping -I vmk1 172.30.1.168 -d -s 8972
PING 172.30.1.168 (172.30.1.168): 8972 data bytes
sendto() failed (Message too long)
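The 8972-byte payload in the example comes from simple header arithmetic: the 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP header (the 28 bytes of overhead mentioned above):

```shell
# Why vmkping uses -s 8972 for a 9000-byte MTU:
mtu=9000
payload=$((mtu - 20 - 8))   # 20-byte IPv4 header + 8-byte ICMP header
echo "vmkping payload size: $payload"
```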
# list network stacks
esxcli network ip netstack list
defaultTcpipStack
Key: defaultTcpipStack
Name: defaultTcpipStack
State: 4660
# Create new standard vSwitch
esxcli network vswitch standard add --vswitch-name=vSwitchVmotion
# List physical adapters
esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ -------- ------------ ----------- ----- ------ ----------------- ---- -----------------------------------------------
vmnic0 0000:0b:00.0 nvmxnet3 Up Up 10000 Full 00:50:56:98:cf:ab 1500 VMware Inc. vmxnet3 Virtual Ethernet Controller
vmnic1 0000:13:00.0 nvmxnet3 Up Up 10000 Full 00:50:56:98:37:94 1500 VMware Inc. vmxnet3 Virtual Ethernet Controller
vmnic2 0000:1b:00.0 nvmxnet3 Up Up 10000 Full 00:50:56:98:bc:bd 1500 VMware Inc. vmxnet3 Virtual Ethernet Controller
# Assign uplink vmnic2 to vSwitch vSwitchVmotion
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitchVmotion
# Migrate the vMotion vmkernel service to a different vSwitch: first create the portgroup
esxcli network vswitch standard portgroup add --portgroup-name=Vmotion --vswitch-name=vSwitchVmotion
#list vmkernel adapters
esxcli network ip interface list
# VMware's netstat
esxcli network ip connection list
# Create vmk interface in the Vmotion portgroup
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Vmotion
# assign static IP to vmkernel port
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.200.21 --netmask=255.255.255.0 --type=static
#enable vmotion on vmk2
vim-cmd hostsvc/vmotion/vnic_set vmk2
# disable vmotion on vmk0
vim-cmd hostsvc/vmotion/vnic_unset vmk0
#Adding a static route to ESXi
esxcfg-route -a 172.16.11.0/24 192.168.0.26
# Remove vmkernel
esxcli network ip interface remove --interface-name vmk2
# List the default route
esxcfg-route
VMkernel default gateway is 172.16.10.253
# List all routes using -l
esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
10.50.0.0 255.255.255.0 Local Subnet vmk0
1.1.1.0 255.255.255.0 Local Subnet vmk2
default 0.0.0.0 10.50.0.252 vmk1
# list dvSwitches
esxcli network vswitch dvs vmware list
# Add custom netstack
esxcli network ip netstack add -N "CustomNetstack"
# Check NIC link status
esxcli network nic list
# Enable/disable single uplink
esxcli network nic down -n vmnicX
esxcli network nic up -n vmnicX
# check single uplink details
esxcli network nic get -n vmnic6
Advertised Auto Negotiation: true
Advertised Link Modes: Auto, 1000BaseT/Full, 100BaseT/Full, 100BaseT/Half, 10BaseT/Full, 10BaseT/Half
Auto Negotiation: false
Cable Type: Twisted Pair
Current Message Level: 0
Driver Info:
Bus Info: 0000:04:00:2
Driver: igbn
Firmware Version: 1.70.0:0x80000f44:1.1904.0
Version: 1.5.2.0
Link Detected: false
Link Status: Down by explicit linkSet
Name: vmnic6
PHYAddress: 0
Pause Autonegotiate: true
Pause RX: true
Pause TX: true
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver: internal
Virtual Address: 00:50:56:59:63:27
Wakeon: MagicPacket(tm)
SYSTEM
# Check syslog config
esxcli system syslog config get
Check Certificate Revocation: false
Default Network Retry Timeout: 180
Dropped Log File Rotation Size: 100
Dropped Log File Rotations: 10
Enforce SSLCertificates: true
Local Log Output: /scratch/log
Local Log Output Is Configured: false
Local Log Output Is Persistent: true
Local Logging Default Rotation Size: 1024
Local Logging Default Rotations: 8
Log To Unique Subdirectory: true
Message Queue Drop Mark: 90
Remote Host: udp://192.168.98.10:514
Strict X509Compliance: false
# Configure syslog
esxcli system syslog config set --loghost=udp://192.168.198.10:514
# Restart the syslog service to apply configuration changes
esxcli system syslog reload
# If you are using vCenter as a syslog collector, logs are located in /var/log/vmware/esx
vSAN
# List vSAN network interfaces
esxcli vsan network list
localcli vsan network list
Interface:
VmkNic Name: vmk2
IP Protocol: IP
Interface UUID: 02c4905c-0117-2e99-9ecb-48df373682cc
Agent Group Multicast Address: 224.2.3.4
Agent Group IPv6 Multicast Address: ff19::2:3:4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group IPv6 Multicast Address: ff19::1:2:3
Master Group Multicast Port: 12345
Host Unicast Channel Bound Port: 12321
Multicast TTL: 5
Traffic Type: vsan
Interface:
VmkNic Name: vmk0
IP Protocol: IP
Interface UUID: bef8905c-1bf7-d03d-a07e-48df373682cc
Agent Group Multicast Address: 224.2.3.4
Agent Group IPv6 Multicast Address: ff19::2:3:4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group IPv6 Multicast Address: ff19::1:2:3
Master Group Multicast Port: 12345
Host Unicast Channel Bound Port: 12321
Multicast TTL: 5
Traffic Type: witness
# Check what would happen if we put the host into maintenance mode with the No Action option
localcli vsan debug evacuation precheck -e 5c90b59b-6dd0-fdfa-5161-48df373682cc -a noAction
Action: No Action:
Evacuation Outcome: Success
Entity: Host TRGEBSVMH02.TRGEBCSN.ra-int.com
Data to Move: 0.00 GB
Number Of Objects That Would Become Inaccessible: 0
Objects That Would Become Inaccessible: None
Number Of Objects That Would Have Redundancy Reduced: 69
Objects That Would Have Redundancy Reduced: (only shown with --verbose option)
Additional Space Needed for Evacuation: N/A
# Remove one of the vSAN interfaces once we have migrated to the new one
esxcli vsan network remove -i vmk3
# vSAN health report from localcli
localcli vsan health cluster list
# Get cluster UUID and members
esxcli vsan cluster get
# Get local node UUID
cmmds-tool whoami
5bb4cf73-6e2f-dfaa-e1b0-9cdc71bb4ed0
# Get vSAN unicast agent list
localcli vsan cluster unicastagent list
NodeUuid IsWitness Supports Unicast IP Address Port Iface Name
5bb4d4a0-8acc-c374-e5fe-d06726d34248 0 true 172.30.1.168 12321
00000000-0000-0000-0000-000000000000 1 true 172.30.1.32 12321
# Add a unicast agent by hand
esxcli vsan cluster unicastagent add -t node -u 5bb4cf73-6e2f-dfaa-e1b0-9cdc71bb4ed0 -U true -a 172.30.1.167 -p 12321
# Create a folder on vSAN
/usr/lib/vmware/osfs/bin/osfs-mkdir /vmfs/volumes/vsan:5223b097ec01c8f5-ca8bf9d261f6796e/some_folder
# vSAN object health summary
localcli vsan debug object health summary get
Health Status Number Of Objects
reduced-availability-with-active-rebuild 0
data-move 0
nonavailability-related-incompliance 0
reduced-availability-with-no-rebuild 45
inaccessible 74
healthy 1
reduced-availability-with-no-rebuild-delay-timer 16
nonavailability-related-reconfig 0
# List vSAN disks
vdq -Hi
Mappings:
DiskMapping[0]:
SSD: naa.51402ec011e50cc7
MD: naa.5000c500a722db23
MD: naa.5000039928481915
MD: naa.5000039928481839
MD: naa.50000399284814c5
MD: naa.50000399284818c5
MD: naa.5000c500a723a283
MD: naa.5000039928481749
# Check whether vSAN disks are operational
esxcli vsan storage list | grep -i Cmmds
# List vSAN disks and disk group membership
esxcli vsan storage list | grep -i Uuid
VSAN UUID: 5230295f-ead5-ee2b-c3ae-1edef5135985
VSAN Disk Group UUID: 52487975-68ed-8ebb-268e-f54ba9358941
VSAN UUID: 523af961-9357-eca0-4e1b-4ada8805984d
VSAN Disk Group UUID: 52487975-68ed-8ebb-268e-f54ba9358941
vSAN Congestion
Congestion is a feedback mechanism to reduce the rate of incoming IO requests from the vSAN DOM client layer to a level that the vSAN disk groups can service.
To check whether an ESXi host is experiencing vSAN congestion, you can run the following script (per host):
for ssd in $(localcli vsan storage list |grep "Group UUID"|awk '{print $5}'|sort -u);do echo $ssd;vsish -e get /vmkModules/lsom/disks/$ssd/info|grep Congestion;done
vSAN metrics
Slab Congestion: This originates in vSAN internal operation slabs. It occurs when the number of inflight operations exceed the capacity of operation slabs.
Comp Congestion: This occurs when the size of some internal table used for vSAN object components is exceeding threshold.
SSD Congestion: This occurs when the cache tier disk write buffer space runs out.
Log Congestion: This occurs when vSAN internal log space usage in cache tier disk runs out.
Mem Congestion: This occurs when the size of used memory heap by vSAN internal components exceed the threshold.
IOPS Congestion: IOPS reservations/limits can be applied to vSAN object components. Congestion occurs when component IOPS exceed the reservation and disk IOPS utilization reaches 100%.
Use the following commands at your own risk. If vSAN reports LSOM errors in the vSAN logs, these metrics can be changed to reduce vSAN congestion.
esxcfg-advcfg -s 16 /LSOM/lsomLogCongestionLowLimitGB # (default 8)
esxcfg-advcfg -s 24 /LSOM/lsomLogCongestionHighLimitGB # (default 16)
esxcfg-advcfg -s 10000 /LSOB/diskIoTimeout
esxcfg-advcfg -s 4 /LSOB/diskIoRetryFactor
esxcfg-advcfg -s 32768 /LSOM/initheapsize # Seems this command is not available in vSphere 6.5 + vSAN 6.6
esxcfg-advcfg -s 2048 /LSOM/heapsize # Seems this command is not available in vSphere 6.5 + vSAN 6.6
Official VMware KB:
https://kb.vmware.com/s/article/2150260
https://kb.vmware.com/s/article/2071384
https://kb.vmware.com/s/article/2149096
What's the current size of the LLOG and PLOG:
for ssd in $(localcli vsan storage list |grep "Group UUID"|awk '{print $5}'|sort -u);do \
llogTotal=$(vsish -e get /vmkModules/lsom/disks/$ssd/info|grep "Log space consumed by LLOG"|awk -F \: '{print $2}'); \
plogTotal=$(vsish -e get /vmkModules/lsom/disks/$ssd/info|grep "Log space consumed by PLOG"|awk -F \: '{print $2}'); \
llogGib=$(echo $llogTotal |awk '{print $1 / 1073741824}'); \
plogGib=$(echo $plogTotal |awk '{print $1 / 1073741824}'); \
allGibTotal=$(expr $llogTotal + $plogTotal|awk '{print $1 / 1073741824}'); \
echo $ssd;echo " LLOG consumption: $llogGib"; \
echo " PLOG consumption: $plogGib"; \
echo " Total log consumption: $allGibTotal"; \
done
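The `awk` divisions in the loop above convert raw byte counts into GiB (1 GiB = 1073741824 bytes). The same conversion in isolation, for a quick sanity check:

```shell
# Convert a byte count (as reported by vsish) to GiB.
bytes=2147483648
gib=$(echo "$bytes" | awk '{print $1 / 1073741824}')
echo "$gib GiB"
```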
Advanced congestion threshold values in vSAN:
# SSD congestion thresholds
esxcfg-advcfg -g /LSOM/lsomSsdCongestionLowLimit
esxcfg-advcfg -g /LSOM/lsomSsdCongestionHighLimit
# Memory congestion thresholds
esxcfg-advcfg -g /LSOM/lsomMemCongestionLowLimit
esxcfg-advcfg -g /LSOM/lsomMemCongestionHighLimit
# Log congestion thresholds
esxcfg-advcfg -g /LSOM/lsomLogCongestionLowLimitGB
esxcfg-advcfg -g /LSOM/lsomLogCongestionHighLimitGB
vSAN resync report
/localhost/Datacenter1/computers/Cluster1> vsan.resync_dashboard .
list of VSAN objects with vSAN UUID
/127.0.0.1/Cluster/computers/vSAN_Cluster> vsan.obj_status_report -t .
VMware ESXi Diagnostic Logs
VMkernel
The VMkernel log records activities related to virtual machines and ESXi.
File Location/directory: /var/log/vmkernel.log
The VMkernel warnings log records warning activity related to virtual machines.
File Location/directory: /var/log/vmkwarning.log
VMkernel Summary - used to determine uptime and availability statistics for ESXi.
File Location/directory: /var/log/vmksummary.log
ESXi Host Agent Log
ESXi Host Agent Log contains information about the agent that manages and configures the ESXi host and its virtual machines.
File Location/directory: /var/log/hostd.log
vCenter Agent Log
It contains information about the agent that communicates with vCenter Server.
File Location/directory: /var/log/vpxa.log
The shell log contains a record of all commands typed into the ESXi Shell and related events.
File Location/directory: /var/log/shell.log
Authentication Logs
It contains all events related to authentication for the local system.
File Location/directory: /var/log/auth.log
System Messages
It contains all general log messages and can be used for troubleshooting.
File Location/directory: /var/log/syslog.log
Virtual Machines
It contains virtual machine power events, system failure information, tools status and activity, time sync, virtual hardware changes, vMotion migrations, machine clones, and so on. If you need more detail, you can also check the logs of the virtual machines hosted on the affected ESXi host.
File Location/directory: /vmfs/volumes/datastore/virtual machine/vmware.log
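When triaging an issue, it is handy to sweep the main host logs in one pass. A minimal sketch using the log names listed above (`scan_logs` is a hypothetical helper; the directory argument defaults to /var/log):

```shell
# Sketch: count error/warning lines in the main ESXi logs.
scan_logs() {
    logdir="${1:-/var/log}"
    for log in vmkernel vmkwarning hostd vpxa; do
        if [ -f "$logdir/$log.log" ]; then
            n=$(grep -Eci "error|warning" "$logdir/$log.log")
            echo "$log.log: $n matches"
        fi
    done
}
```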