## Computer Science: An Overview: Global Edition (12th Edition)

Dot-decimal notation is a presentation format for numerical data expressed as a string of decimal numbers, each separated by a full stop (dot).

**a.** The solution is:

Step 1: Divide the bit pattern into 8-bit bytes, as follows:

$$\begin{array}{c}{00001111} \\ {00001111}\end{array}$$

Step 2: Convert each 8-bit byte to its equivalent decimal representation, as shown below:

$$\begin{array}{l}{00001111 \rightarrow 15} \\ {00001111 \rightarrow 15}\end{array}$$

Step 3: Join the decimal values with dots, as shown below:

$$15.15$$

**b.** The solution is:

Step 1: Divide the bit pattern into 8-bit bytes, as follows:

$$\begin{array}{c}{00110011} \\ {00000000} \\ {10000000}\end{array}$$

Step 2: Convert each 8-bit byte to its equivalent decimal representation, as shown below:

$$\begin{array}{l}{00110011\rightarrow 51} \\ {00000000\rightarrow 0} \\ {10000000\rightarrow 128}\end{array}$$

Step 3: Join the decimal values with dots, as shown below:

$$51.0.128$$

**c.** The solution is:

Step 1: Divide the bit pattern into 8-bit bytes, as follows:

$$\begin{array}{c}{00001010} \\ {10100000}\end{array}$$

Step 2: Convert each 8-bit byte to its equivalent decimal representation, as shown below:

$$\begin{array}{l}{00001010\rightarrow 10} \\ {10100000\rightarrow 160}\end{array}$$

Step 3: Join the decimal values with dots, as shown below:

$$10.160$$
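The three-step procedure used above can be sketched in Python as a small helper function (the name `to_dot_decimal` is illustrative, not from the text): split the bit pattern into 8-bit bytes, convert each byte to decimal, and join the results with dots.

```python
def to_dot_decimal(bits: str) -> str:
    """Convert a binary bit pattern into dot-decimal notation.

    Step 1: split the pattern into 8-bit bytes.
    Step 2: convert each byte to its decimal value.
    Step 3: join the decimal values with dots.
    """
    if len(bits) % 8 != 0:
        raise ValueError("bit pattern length must be a multiple of 8")
    # Step 1: slice the string into consecutive 8-bit chunks.
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    # Steps 2 and 3: interpret each chunk as base-2, then join with dots.
    return ".".join(str(int(chunk, 2)) for chunk in chunks)

print(to_dot_decimal("0000111100001111"))          # 15.15
print(to_dot_decimal("001100110000000010000000"))  # 51.0.128
print(to_dot_decimal("0000101010100000"))          # 10.160
```

Note that the function accepts any whole number of bytes, matching the exercise, which uses two- and three-byte patterns rather than the full four bytes of an IPv4 address.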