include/utime: do not cast sec to time_t 27861/head
authorKefu Chai <kchai@redhat.com>
Mon, 29 Apr 2019 12:48:30 +0000 (20:48 +0800)
committerKefu Chai <kchai@redhat.com>
Mon, 29 Apr 2019 13:11:52 +0000 (21:11 +0800)
strictly speaking, time_t is an opaque type, and in this context we
need a uint32_t, so the cast is both unnecessary and wrong. let's remove it.

Signed-off-by: Kefu Chai <kchai@redhat.com>
src/include/utime.h

index f2f88b55e0c8a9c470cee6500c4f89f7e6a4bbf5..019eb7143a72e97909c78ecce8919bb6c8bd6c71 100644 (file)
@@ -83,8 +83,8 @@ public:
 
 #if defined(WITH_SEASTAR)
   explicit utime_t(const seastar::lowres_system_clock::time_point& t) {
-    tv.tv_sec = std::time_t(std::chrono::duration_cast<std::chrono::seconds>(
-        t.time_since_epoch()).count());
+    tv.tv_sec = std::chrono::duration_cast<std::chrono::seconds>(
+        t.time_since_epoch()).count();
     tv.tv_nsec = std::chrono::duration_cast<std::chrono::nanoseconds>(
         t.time_since_epoch() % std::chrono::seconds(1)).count();
   }