
- Determining the equipment, materials, and manpower needed, and keeping track of inventory, tools, and equipment.
- Preparing reports on job status and resolving any problems that arise.
- Ensuring compliance with safety regulations and evaluating risks.
- Collaborating with subcontractors, engineers, architects, and other key members of the project team.
- Planning construction operations and ensuring all deadlines are met.
- Allocating and managing resources so that they are available whenever needed throughout the construction projects.
- Ensuring timely testing of construction materials.
- Ensuring project execution as per the design, drawings, and specifications.
- Coordinating with the Chief Engineer on site-related issues such as resources, drawings, and materials.
What you need to have:
- B.E./B.Tech/Diploma in Civil Engineering/Project Management
- A minimum of 5-6 years of civil engineering experience
- Real estate industry experience preferred
- Exposure to high-rise, commercial, or residential buildings

Role: SDET
Core Testing Competencies
- Strong understanding of STLC and Agile methodologies
- Experience in Integration, System, UAT, Regression, and Sanity testing
- Hands-on experience in E-commerce application testing
- API testing using tools like Postman / SoapUI
- Mobile testing on iOS and Android (real devices/emulators)
- Cross-browser web testing
Automation Skills
- Designing and maintaining automation frameworks:
  - Page Object Model (POM)
  - Data-Driven and Keyword-Driven frameworks
- Programming languages: Java, JavaScript
- Web automation: Puppeteer
- Mobile automation: Appium, WebdriverIO
- CI/CD integration using Jenkins
- Backend automation using JUnit (good to have)
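The data-driven style named above keeps test data separate from test logic, so new cases are added without touching the harness. A minimal sketch (in Python for brevity, although the role names Java/JavaScript; the function and cases are hypothetical, for illustration only):

```python
# Minimal data-driven test sketch (hypothetical function and data,
# assumed for illustration): each data row drives one assertion.

def cart_total(prices, discount_pct):
    """Sum prices and apply a percentage discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_pct / 100), 2)

# Test data kept separate from test logic -- the core idea of the
# data-driven style.
CASES = [
    ([100.0, 50.0], 10, 135.0),
    ([20.0], 0, 20.0),
    ([], 50, 0.0),
]

def run_cases():
    for prices, discount, expected in CASES:
        actual = cart_total(prices, discount)
        assert actual == expected, (prices, discount, actual)
    return len(CASES)

print(run_cases())  # 3
```

In a real framework the same separation is usually expressed with pytest parametrization or JUnit parameterized tests rather than a hand-rolled loop.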
Tools & Technologies
- Test management & documentation: Jira, Confluence
- Defect tracking: Jira, Excel
- Proxy tools: Charles
- Databases: SQL (MySQL, PostgreSQL, Oracle) for backend validation
- Browser debugging & network inspection: Chrome DevTools
Other Key Skills
- Test documentation and bug reporting
- RTM (Requirement Traceability Matrix)
- Strong analytical and problem-solving skills
Data - AI / ML Engineer
Full-Time | On-site, Kolkata | Immediate Joining | 4+ years experience
ABOUT US
Company Name: Freeflow ventures
We are a venture building and investment firm focused on emerging markets across India, the Middle East, and Africa. We work with early-stage startups, diagnosing gaps, structuring interventions, and preparing them for investor readiness through a proprietary data and intelligence platform.
Our platform combines automated data verification, startup scoring, and structured workflow automation to bring consistency and credibility to early-stage investment decisions. We are at an active build and expansion phase, and this role sits at the core of that infrastructure.
ROLE OVERVIEW
We are looking for a Data - AI / ML Engineer who can own both the data pipelines that bring verified information into our platform and the intelligence models that turn that information into reliable startup scores.
This is a dual-responsibility role. You will be expected to build and maintain robust data infrastructure as well as develop, calibrate, and improve machine learning models. Both are equally important to the platform.
You will work closely with the Platform Owner and alongside a Backend Engineer who owns system integrations and workflow logic. Your work produces the scored intelligence output. The Backend Engineer's work connects that output to platform actions. The two roles are tightly interdependent and require close daily collaboration, especially in the first 30 days.
Note: You are the first technical hire on the platform team. The Backend Engineer joins the same week. Clear communication, well-defined handoff points, and shared documentation between the two of you are non-negotiable from Day 1.
WHAT YOU WILL DO
Data Pipeline
- Build and maintain pipelines that collect, clean, and normalize data from multiple external sources into a consistent, usable format
- Design connector architecture that allows individual data sources to be added, swapped, or removed without rebuilding the entire pipeline
- Implement automated data quality checks that catch bad data before it reaches the scoring layer: anomaly detection, constraint enforcement, and schema validation
- Build an automated eligibility screening system that verifies whether a startup has sufficient verified data before assessment begins
- Ensure the pipeline is resilient: critical data signals must have backup sources so that a single vendor failure does not disrupt platform output
- Structure data storage to support different regulatory requirements across multiple countries: data from different regions must be handled according to that region's rules
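The quality-check layer described above can be sketched as a small validation pass over each incoming record. A minimal sketch, assuming hypothetical field names and bounds (not the platform's real schema):

```python
# Minimal sketch of pre-scoring data quality checks: schema validation,
# constraint enforcement, and a crude range-based anomaly flag.
# Field names and bounds are hypothetical, not the platform's real schema.

REQUIRED = {"startup_id": str, "revenue_usd": (int, float), "founded_year": int}

def validate_record(record):
    errors = []
    # Schema validation: required fields present, with expected types.
    for field, typ in REQUIRED.items():
        if field not in record:
            errors.append(f"missing:{field}")
        elif not isinstance(record[field], typ):
            errors.append(f"type:{field}")
    # Constraint enforcement on known fields.
    revenue = record.get("revenue_usd")
    if isinstance(revenue, (int, float)) and revenue < 0:
        errors.append("constraint:revenue_usd<0")
    # Simple anomaly detection: value outside a plausible range.
    year = record.get("founded_year")
    if isinstance(year, int) and not (1990 <= year <= 2030):
        errors.append("anomaly:founded_year")
    return errors

clean = {"startup_id": "s1", "revenue_usd": 120000.0, "founded_year": 2021}
bad = {"startup_id": "s2", "revenue_usd": -5, "founded_year": 1875}
print(validate_record(clean))  # []
print(validate_record(bad))    # constraint and anomaly flags
```

Records that fail validation would be quarantined before the scoring layer ever sees them, which is the point of running these checks inside the pipeline rather than downstream.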
AI and Machine Learning
- Audit the existing scoring engine before making any changes -understand what it does, how it was built, and what would be lost if it were modified
- Calibrate scoring models against real portfolio data so that scores are meaningful, consistent, and comparable across different startup types and stages
- Build confidence scoring logic that determines when the system is certain enough to act autonomously and when it should route to human review
- Ensure every model output is explainable -investors must be able to see exactly which data points drove a score, not just the final number
- Build a feedback loop so that real-world outcomes feed back into the model over time, making it progressively more accurate
- Maintain a structured data store of assessment outputs and outcomes that the model uses to improve
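The confidence-routing logic above reduces to a threshold decision: act autonomously when the model is certain enough, otherwise escalate. A minimal sketch (the threshold value and score shape are hypothetical; in practice the threshold would come from calibration against portfolio data):

```python
# Minimal sketch of confidence-based routing: the system acts
# autonomously only above a calibrated threshold, otherwise it
# routes the assessment to human review.

AUTO_THRESHOLD = 0.85  # hypothetical calibration point, not a real platform value

def route(score, confidence):
    """Return the action for a scored assessment."""
    if confidence >= AUTO_THRESHOLD:
        return {"action": "auto", "score": score}
    return {"action": "human_review", "score": score, "confidence": confidence}

print(route(72, 0.91))  # {'action': 'auto', 'score': 72}
print(route(72, 0.60))  # routed to human review
```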
Working With the Backend Engineer
- Define a clear data contract at the handoff point: what data you produce, in what format, and what the Backend Engineer can expect to receive
- Collaborate on trigger logic: which score thresholds or confidence drops should fire which system actions
- Align on data schema requirements so that the APIs the Backend Engineer builds conform to the structure your pipeline produces
- Communicate blockers early: the pipeline and backend system are built simultaneously, so delays on one side directly affect the other
- Document everything you build so the Backend Engineer and Platform Owner can understand, debug, and extend it without depending on you for every question
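A data contract at the handoff point is easiest to enforce when the pipeline's output has one fixed, documented shape. A minimal sketch, with hypothetical field names chosen for illustration:

```python
# Minimal sketch of a handoff data contract: the pipeline emits records
# of a fixed, documented shape that the backend can rely on.
# Field names are hypothetical, chosen for illustration.

from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ScoredStartup:
    startup_id: str
    score: float       # 0-100 overall score
    confidence: float  # 0-1 model confidence
    drivers: tuple     # top data points behind the score (explainability)

def emit(record: ScoredStartup) -> dict:
    """Serialize to the agreed wire format (plain dict/JSON)."""
    return asdict(record)

payload = emit(ScoredStartup("s1", 74.5, 0.88, ("revenue_growth", "team")))
print(sorted(payload))  # fixed key set the backend can depend on
```

Freezing the dataclass and serializing through one function gives both sides a single place to version the contract when fields change.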
WHAT WE ARE LOOKING FOR
Skills are divided into two categories. Must Have means the role cannot function without it. Good to Have means it gives you an edge.
| Skill | Priority |
| --- | --- |
| Data Engineering | |
| Building and maintaining data pipelines from multiple sources | Must Have |
| Data normalization and schema design | Must Have |
| Automated data quality validation | Must Have |
| API integration across different source types | Must Have |
| Pipeline orchestration and scheduling | Must Have |
| Cloud infrastructure: storage, compute, and deployment | Must Have |
| Version control and code documentation | Must Have |
| AI and Machine Learning | |
| Building and calibrating supervised machine learning models | Must Have |
| Model explainability: making model outputs traceable and interpretable | Must Have |
| Confidence scoring and threshold calibration | Must Have |
| Experiment tracking and model versioning | Must Have |
| Building feedback loops that improve models over time using real-world outcomes | Must Have |
| Natural language processing or document understanding | Good to Have |
| Vector databases and semantic search | Good to Have |
| Collaboration and Context | |
| Ability to define clear data contracts and handoff points with backend engineers | Must Have |
| Clear written documentation of pipeline logic, model decisions, and failure modes | Must Have |
| Prior experience working in or with early-stage startups | Good to Have |
| Exposure to financial data, investment platforms, or data verification systems | Good to Have |
WHAT WE OFFER
- Competitive compensation based on experience, discussed during the interview process
- Ownership of both the data and intelligence layers from Day 1; this is not a support or maintenance role
- Direct access to the Platform Owner and Founder
- Close collaboration with a Backend Engineer from Day 1; the two roles are designed to work as a unit
- Work on a genuinely novel problem in an emerging market context
- On-site in Kolkata with a small, high-accountability team
- Opportunity to scale the platform across multiple international markets
- Optimize website content, structure, and performance.
- Conduct keyword research and implement on/off-page SEO.
- Manage and optimize PPC campaigns on platforms like Google Ads.
- Create, schedule, and manage engaging social media content.
- Monitor, measure, and analyze digital marketing performance using tools like Google Analytics.
- Keep up-to-date with industry trends and best practices.
We are seeking a dedicated and skilled AI Project Field Engineer to join our team. The successful candidate will be responsible for executing AI projects on-site, ensuring the seamless deployment and operation of AI models and systems. This role requires a combination of technical expertise, problem-solving skills, and a strong customer focus.
Responsibilities:
- Execute and manage AI projects on customer sites, ensuring timely and successful deployment.
- Deploy and run AI models using PyTorch on various hardware configurations.
- Set up and maintain computer networks, particularly those involving IP cameras.
- Write and maintain shell scripts to automate deployment and monitoring tasks.
- Develop and troubleshoot Python code related to AI models and their deployment.
- Collaborate with customers to understand their needs and ensure their success with our AI solutions.
- Perform on-site visits as required to install, test, and troubleshoot AI systems.
- Provide training and support to customers on the use and maintenance of deployed AI systems.
- Work closely with the development team to provide feedback and insights from the field.
- Document all processes, configurations, and customer interactions for future reference.
Customer Support at Contlo
Contlo is a pioneering AI native marketing platform that empowers modern, fast-growing businesses to leverage their brand's generative AI Model for end-to-end marketing optimization. Our platform enables businesses to drive customer retention through personalised campaigns and automated customer journeys across various channels, including Email, SMS, WhatsApp, Web push, and Social media.
With Contlo's Brand AI Model™, businesses can orchestrate all their brand marketing activities by generating personalised marketing creatives and copies, creating behaviour-based customer segments, and auto-generating customer journeys. As the Brand AI Model™ is utilized, it continuously learns, improving marketing outcomes and enhancing sales performance.
At Contlo, we are looking for a Customer Support Specialist to assist our customers with technical problems when using our products and services.
Customer Support Specialist responsibilities include resolving customer queries, recommending solutions and guiding product users through features and functionalities. To succeed in this role, you should be an excellent communicator who can earn our clients’ trust. You should also be familiar with help desk software.
Ultimately, you will help establish our reputation as a company that offers excellent customer support during all sales and after-sales procedures.
Responsibilities:
- Respond to customer queries in a timely and accurate way, via phone, email or chat
- Identify customer needs and help customers use specific features
- Analyze and report product malfunctions (for example, by testing different scenarios or impersonating users)
- Update our internal databases with information about technical issues and useful discussions with customers
- Monitor customer complaints on social media and reach out to provide assistance
- Share feature requests and effective workarounds with team members
- Inform customers about new features and functionalities
- Follow up with customers to ensure their technical issues are resolved
- Gather customer feedback and share with our Product, Sales and Marketing teams
- Assist in training junior Customer Support Representatives
Requirements and skills:
- Experience as a Customer Support Specialist or in similar CS roles
- Familiarity with the IT SaaS industry is a must
- Experience using help desk software and remote support tools
- Understanding of how CRM systems work
- Excellent communication and problem-solving skills
- Multi-tasking abilities
- Patience when handling tough cases
- B.Tech/BCA degree
About Us:
We’re a team of finance and technology enthusiasts who enjoy revolutionizing the investment industry through digital products & services. We’re building the next generation investment management platform for our financial professional customers so they can build better investment portfolios & help their clients retire in style.
If you're looking for challenging work, smart colleagues, and a global employer with a social conscience, come explore your potential at Invesco. Make a difference every day!
Responsibilities:
- Design, develop, test, deploy, and maintain highly performant, API-driven web applications on our stack (Angular, React, SASS, Java Spring Boot, MuleSoft).
- Design and implement REST APIs to industry/company standards
- Work with our amazing product design team to develop and iterate on user interfaces that bring simplicity to complicated financial data.
- Interact with engineering members across the organization to ensure consistency in engineering practices and foster active exchange of ideas
- We have development centers in Atlanta, Houston, New York City and India and the ability to collaborate across a global organization is a key skill.
- Perform peer code reviews. Review performance, security and flexibility of the code.
- Participate in agile ceremonies (e.g. daily standup, release and sprint planning, demos, scrum of scrums).
- Work with Architect to define technical roadmaps
- Participate in and help to evangelize and promote enterprise solutions with business and technology partners
The Experience You Bring:
- 2+ years of experience with front-end frameworks such as Angular and React
- 3+ years of experience with back-end technologies such as Node.js, Express, or Java Spring Boot
- 1+ years of experience with MuleSoft or API development
- Experience with core AWS services
- Experience working in an Agile team and environment
- Familiarity with software engineering support systems and tools, such as Git, Jenkins, Bamboo, Gulp, Bower, Maven, Log4j
- Familiarity with SSO solutions, such as SAML, OAuth, and OpenID, a plus
- Knowledge of the Financial Services industry a plus
- Good to have:
  - Experience with DevOps tools like Docker, Kubernetes, Jenkins, etc.
  - Experience in developing microservices-based architectures
  - Knowledge of frontend technologies like HTML, CSS, and JavaScript
  - Experience with Python technologies
What you will do:
- Leading, planning, and executing high-scale performance campaigns on multiple advertising platforms
- Strategizing the end-to-end media plan, using high budgets across digital and non-digital channels
- Collaborating with product and tech teams to provide the best user experience for new users
- Exploring new channels for growth and leading their end-to-end execution
- Diving deep into data and using insights to identify opportunities for performance optimisation
- Experimenting continuously on creatives and design to drive ROI
Desired Candidate Profile
What you need to have:
- Bachelor's or Master's degree (from a Tier 1 college: IIT/NIT/IIM/XLRI/MDI or any other Tier 1 college)
- 7+ years experience with 4+ years experience in digital marketing at a high growth app-based B2C organisation
- Strong quantitative mindset and data driven approach to digital marketing
- Expertise in Excel-based data modelling and analytics tools
- Handled large scale monthly digital marketing budgets
- Hands-on experience in managing Facebook Ads, Google Ads at scale
- Experience in programmatic advertising is a big plus for this role
- Experience in managing TV budgets is a big plus for this role
- Experience in understanding data using Google Analytics/ Mixpanel/ AppsFlyer or any other analytics or attribution tool
- Experience in cross-team collaboration, team management and agency management
JD for IoT DE:
The role requires experience in Azure core technologies: IoT Hub/Event Hub, Stream Analytics, IoT Central, Azure Data Lake Storage, Azure Cosmos DB, Azure Data Factory, Azure SQL Database, Azure HDInsight/Databricks, and SQL Data Warehouse.
You Have:
- Minimum 2 years of software development experience
- Minimum 2 years of experience in IoT/streaming data pipelines solution development
- Bachelor's and/or Master’s degree in computer science
- Strong Consulting skills in data management including data governance, data quality, security, data integration, processing, and provisioning
- Delivered data management projects with real-time/near real-time data insights delivery on Azure Cloud
- Translated complex analytical requirements into the technical design including data models, ETLs, and Dashboards / Reports
- Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
- Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
- Successfully delivered large-scale IoT data management initiatives covering Plan, Design, Build, and Deploy phases, leveraging different delivery methodologies including Agile
- Experience in handling telemetry data with Spark Streaming, Kafka, Flink, Scala, PySpark, and Spark SQL
- Hands-on experience with containers and Docker
- Exposure to streaming protocols like MQTT and AMQP
- Knowledge of OT network protocols like OPC UA, CAN Bus, and similar protocols
- Strong knowledge of continuous integration, static code analysis, and test-driven development
- Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
- Must have excellent analytical and problem-solving skills
- Delivered change management initiatives focused on driving data platforms adoption across the enterprise
- Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations
Roles & Responsibilities
You Will:
- Translate functional requirements into technical design
- Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core Azure services needed to fulfill the technical design
- Design, Develop and Deliver data integration interfaces in ADF and Azure Databricks
- Design, Develop and Deliver data provisioning interfaces to fulfill consumption needs
- Deliver data models on the Azure platform, whether on Azure Cosmos DB, SQL DW/Synapse, or SQL Database
- Advise clients on ML Engineering and deploying ML Ops at Scale on AKS
- Automate core activities to minimize the delivery lead times and improve the overall quality
- Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
- Deploy Azure DevOps and CI/CD processes
- Deploy logging and monitoring across the different integration points for critical alerts
- Experience in Core PHP.
- Experience in the Laravel framework.
- Understanding of open-source eCommerce platforms like WooCommerce and Magento.
- Experience in REST API integrations.
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3.
